State Nullification by Memoryless Output Feedback∗
Zvi Artstein† and Gera Weiss‡

Math. Control Signals Systems (2005) 17: 38–56
DOI: 10.1007/s00498-004-0144-1
© 2004 Springer-Verlag London Ltd.
Abstract. We examine linear single-input single-output finite-dimensional systems. It is shown that a continuous-time controllable and observable system can
be nullified utilizing periodic sampling of the output with time-varying linear feedback. Almost any sampling rate can be used. The result relies on a characterization
of linear output feedback nullification of discrete-time observable and controllable
systems. An algorithm for the nullification and an estimate on the time in which
the algorithm is concluded are provided.
Key words. Output feedback, Static feedback, Nullification, Hold function,
Sampled data.
1. Introduction and the Main Results
Nullifying the state, namely, driving the state to the origin in finite time, can be
viewed as a sharp form of stabilization (desirable at times due to precision requirements); it is also of interest for its own sake as a structural problem. In this paper
we examine the issue of nullifying a linear system; in this setting, nullification in a given
time interval amounts to placing the poles of the associated transformation at the
origin. We are interested in utilizing linear output feedback which is memoryless,
namely, time-varying static feedback. While efficient algorithms for pole placement
invoking dynamic feedback are available, the issue of stabilization with memoryless feedback has been recognized as a subtle problem and was recorded as such
in Brockett [3]. The problem has drawn attention in the literature. Some papers,
those most relevant to the subject matter of the present contribution, are listed as
references below and discussed in the body of the text. In particular, the nullification result concerning discrete-time systems displayed in the present paper fills
a gap in Aeyels and Willems [2] where a possibility of pole assignment is offered,
provided that the poles are not zero. Theorem D below provides a characterization
of the possibility to nullify a controllable and observable discrete-time linear system
∗ Date received: September 26, 2003. Date revised: April 15, 2004. Published online: 7 October 2004. Research supported by grants from the Israel Science Foundation and from the Information Society Technologies Programme of the European Commission.
† Department of Mathematics, The Weizmann Institute of Science, Rehovot, 76100, Israel. [email protected]. (Incumbent of the Hettie H. Heineman Professorial Chair in Mathematics).
‡ Department of Computer Science and Applied Mathematics, The Weizmann Institute of Science, Rehovot, 76100, Israel. [email protected].
with a static linear output feedback. The proof we display is intricate, but it results
in a simple algorithm for both checking the condition and nullifying the system; we
also provide an estimate for the number of steps in which the algorithm is concluded.
The result in the discrete-time framework is then used to analyze the possibility to nullify a continuous-time system. Our approach pertains to the analysis of
sampled-data hold functions, also referred to as deadbeat controls; see the analysis
of Kabamba [5] where a general result concerning generalized sampled-data hold
functions is presented. Theorem S below verifies that for all but a discrete set of sample periods the sufficient conditions established in the analysis of the discrete-time
case are satisfied.
In both the discrete-time case and the continuous-time framework we address single-input single-output systems (even in this case, as mentioned, the proof is quite
intricate).
The main results are as follows.
The discrete-time systems we examine are finite-dimensional linear systems whose
dynamics is generated by the equations of the form

xj+1 = Axj + buj
yj+1 = cxj+1    (1.1)
where A is an n × n matrix, b an n-dimensional column vector and c an n-dimensional row vector. In particular, the control u is scalar and the system is single-input
single-output. When feedback from the output is invoked, the control is of the
form uj = u(cxj ). In particular, when the feedback from the output is linear and
memoryless, the dynamics has the form
xj+1 = Axj + kj bcxj.    (1.2)
The main problem we examine is the possibility of nullifying the system while utilizing a memoryless linear feedback from the output. Our technique yields some new
information about the intimately related property of stabilizing the system while
utilizing a linear feedback from the output. We refer to these problems as output
nullification and output stabilization (they should not be confused with nullifying
and stabilizing the output). The nullifiability is formalized as follows.
Definition 1.1. The system (1.1) is memoryless linear output feedback nullifiable
if a finite sequence k0 , k1 , . . . , kj0 exists such that for any initial condition x0 given
at time t = 0 the sequence x1 , x2 , . . . , resulting from the dynamics (1.2) satisfies
xj0+1 = 0; equivalently, (A + kj0 bc) · · · (A + k1 bc)(A + k0 bc) = 0.
The definition of linear output feedback stabilization is analogous; for instance,
one may require the existence of k0 , k1 , . . . , kj0 such that all the eigenvalues of the
matrix (A + kj0 bc) · · · (A + k1 bc)(A + k0 bc) have absolute value less than 1.
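Definition 1.1 can be checked directly on a small example. The following sketch (the 2 × 2 system and the gains are our own illustration, not from the paper; numpy is assumed) exhibits a gain sequence whose product of feedback factors is the zero matrix:

```python
import numpy as np

# Our own minimal example with n = 2: controller canonical form with
# alpha = (1, 0) and gamma = (1, 0), so the system is controllable and
# observable and gamma_1 != 0.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])

# With the gain sequence k_0 = k_1 = -1 each factor A + k b c equals
# [[0, 1], [0, 0]], and the product of the two factors vanishes,
# so x_2 = 0 for every initial state x_0.
M = lambda k: A + k * (b @ c)
P = M(-1.0) @ M(-1.0)
assert np.allclose(P, 0.0)
```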
In order to formulate our main result we recall two notions. First recall the controller canonical form of the system (see e.g. Sontag [11, Definition 4.1.5]), namely,
if the system (1.1) is controllable then after a similarity change of variables the data
have the form
      ( 0    1    0   · · ·   0  )
      ( 0    0    1   · · ·   0  )
A =   ( ...                 ...  ) ,
      ( 0    0    0   · · ·   1  )
      ( α1   α2   α3  · · ·   αn )

b = (0, 0, . . . , 0, 1)^T ,    c = (γ1 , γ2 , . . . , γn ).    (1.3)
Secondly, recall the notion of the adjugate, or adjoint, of an n × n matrix denoted
by Adj(A). It is the n × n matrix where the (i, j) entry is the cofactor of aj,i in the
determinant of A, namely, the (i, j) entry is (−1)^{i+j} times the determinant of the
sub-matrix obtained by deleting the j-th row and the i-th column from A (see e.g.
Hohn [4, pages 56, 85]). We refer to the term cAdj(A)b in the statement of the main
result below. A direct inspection reveals that when the system is in the canonical form
then cAdj(A)b = (−1)^{n+1} γ1. We also refer to the determinant Det(A) of the matrix
A. Notice that when the system is in the canonical form then Det(A) = (−1)^{n+1} α1.
The main result concerning discrete-time systems is as follows.
Theorem D. Suppose that the system (1.1) is controllable and observable. A necessary and sufficient condition for linear output feedback nullification is cAdj(A)b ≠ 0
(which amounts to γ1 ≠ 0 when the system is in the controller canonical form).
We observe that it is enough to verify the result when the system is in the canonical
form (1.3). Indeed, the expression cAdj(A)b is invariant under similarity transformations. In particular, the value of cAdj(A)b is equal to (−1)^{n+1} γ1 computed in the
canonical form. Also, the result could be stated in terms of the transfer function
of the system. Indeed, the number γ1 is the constant term in the numerator of the
transfer function c(Iz − A)^{-1}b of the system.
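The quantity cAdj(A)b can be computed straight from the cofactor definition. The sketch below (our own; numpy assumed, with a hypothetical 2 × 2 canonical-form system for illustration) confirms that it equals (−1)^{n+1} γ1:

```python
import numpy as np

def adjugate(A):
    """Adjugate of A: entry (i, j) is the cofactor of A[j, i], i.e.
    (-1)**(i + j) times the minor obtained by deleting row j, column i."""
    n = A.shape[0]
    adj = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

# An illustrative 2x2 system of our own in controller canonical form,
# with alpha = (1, 0) and gamma = (1, 0):
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])

# Sanity check of the adjugate: A Adj(A) = Det(A) I.
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(2))

# Theorem D's quantity: here n = 2 and gamma_1 = 1, so
# cAdj(A)b = (-1)**(n+1) * gamma_1 = -1, which is nonzero.
val = float(c @ adjugate(A) @ b)
assert np.isclose(val, -1.0)
```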
See Remark 2.3 on the relation of the conditions in the previous theorem to
stabilization.
The continuous-time systems we examine are of the form
dx/dt = Ax + bu
y = cx    (1.4)
with A, b and c as before. The sampled control strategy we consider allows feedback
from the output at prescribed sampling times and holds the control constant in periods between samplings. Suppose that the sampling period is of step size h > 0. For
the discrete sequence of times where the samplings occur, say tj = j h, j = 0, 1, . . . ,
one gets a discrete-time system of the form (1.1), say,

xj+1 = Ah xj + bh uj
yj+1 = cxj+1    (1.5)
where the data are given by

Ah = e^{hA} ,    bh = ∫_0^h e^{(h−s)A} b ds.    (1.6)

See Sontag [11, Section 3.4].
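For small examples the pair (Ah, bh) of (1.6) can be computed from the exponential of an augmented matrix, exp(h [[A, b], [0, 0]]) = [[Ah, bh], [0, 1]] (a standard identity; the code below is our own sketch, assumes numpy, and uses a hypothetical double-integrator example whose closed forms are known):

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential by a truncated Taylor series (adequate for the
    small, well-scaled matrices of this sketch)."""
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def discretize(A, b, h):
    """Sampled data of (1.6): A_h = e^{hA}, b_h = int_0^h e^{(h-s)A} b ds,
    read off from the exponential of the augmented matrix [[A, b], [0, 0]]."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n:] = b
    E = expm_taylor(h * M)
    return E[:n, :n], E[:n, n:]

# Double integrator (our illustrative example): the closed forms are
# A_h = [[1, h], [0, 1]] and b_h = (h**2/2, h)^T.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
h = 0.5
Ah, bh = discretize(A, b, h)
assert np.allclose(Ah, [[1, h], [0, 1]])
assert np.allclose(bh, [[h**2 / 2], [h]])
```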
Definition 1.2. We say that the continuous system (1.4) is h-sample linear output
feedback nullifiable if the corresponding discrete time system (1.5) is linear output
feedback nullifiable.
The main result concerning sampled data systems refers to a discrete set of sampling periods. By a discrete set we mean a set whose intersection with any bounded
interval is finite. The result is as follows.
Theorem S. Suppose that the system (1.4) is controllable and observable. The system is h-sample linear output feedback nullifiable except for a discrete set of sampling
periods h.
The rest of the paper is organized as follows. The proof of the necessity in Theorem D is given in Sect. 2. The sufficiency in Theorem D is established in Sect. 3.
The proof has an algorithmic nature and utilizes arguments which yield information also when the sufficient conditions do not hold. The algorithm is fairly simple
and is displayed in Sect. 4 along with an estimate on the number of steps it takes
to conclude the algorithm. In Sect. 5 we provide some corollaries of Theorem D
along with other comments, examples and counterexamples within the framework
of discrete time systems. In Sect. 6 we verify Theorem S and provide some comments
and examples.
2. The Necessity in Theorem D
The necessity of the condition cAdj(A)b ≠ 0 (equivalently, γ1 ≠ 0 in the controller
canonical form) follows from two simple observations.
Lemma 2.1. Det(A + kbc) = Det(A) + k cAdj(A)b (equivalently, Det(A + kbc) =
(α1 + kγ1)(−1)^{n+1} when A, b, c are given in the controller canonical form (1.3)).
Proof. The claims are straightforward for the canonical form and hold in the general case since all the expressions are invariant under similarity transformations.
(The result for A nonsingular is recorded in Aeyels and Willems [2] where it is used
in the same manner we use it.)
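Lemma 2.1 is easy to confirm numerically. A sketch of our own (numpy assumed; for the almost surely nonsingular random matrices used here, Adj(A) = Det(A)·A^{-1}):

```python
import numpy as np

# Numerical check of Lemma 2.1 on random data:
# Det(A + k b c) = Det(A) + k cAdj(A)b.
rng = np.random.default_rng(0)
for _ in range(20):
    n = int(rng.integers(2, 5))
    A = rng.standard_normal((n, n))     # nonsingular with probability 1
    b = rng.standard_normal((n, 1))
    c = rng.standard_normal((1, n))
    k = float(rng.standard_normal())
    adjA = np.linalg.det(A) * np.linalg.inv(A)
    lhs = np.linalg.det(A + k * (b @ c))
    rhs = np.linalg.det(A) + k * float(c @ adjA @ b)
    assert np.isclose(lhs, rhs)
```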
Lemma 2.2. If in the controller canonical form γ1 = 0 then α1 ≠ 0 is a necessary
condition for observability.
Proof. Suppose that both γ1 and α1 are equal to 0. Consider the vector x0 =
(1, 0, . . . , 0)^T (the superscript T signifies transposition). Since α1 = 0 then Ax0 = 0,
hence A^j x0 = 0 for any j > 0. Since γ1 = 0 then cx0 = 0, which together with the
obvious equalities cA^j x0 = 0 for all j > 0 contradicts observability.
Completion of the proof of the necessity in Theorem D. As was pointed out after
the statement of Theorem D it is enough to establish the result for a system in the
canonical form. Now notice that Lemma 2.1 implies that when γ1 = 0 the equality |Det((A + kj bc) · · · (A + k1 bc)(A + k0 bc))| = |α1|^{j+1} holds for any sequence
k0, . . . , kj of inputs. Also, under observability Lemma 2.2 implies that whenever
γ1 = 0 then α1 ≠ 0. Therefore, if observability holds and γ1 = 0 then no sequence
of inputs can generate the zero matrix and the nullification property does not hold.
Remark 2.3. The condition cAdj(A)b ≠ 0 which characterizes nullification is
also related to stabilization. In fact, it is a necessary condition for linear output
feedback stabilization when |Det(A)| ≥ 1 (which amounts to |α1| ≥ 1 when the system is in the canonical form). Indeed, when cAdj(A)b = 0 and |Det(A)| ≥ 1 then
by Lemma 2.1 the determinant of any matrix generated by a sequence of inputs
has absolute value greater than or equal to 1; this prohibits linear output feedback
stabilization.
3. The Sufficiency in Theorem D
First we simplify the arguments of the main construction and offer a more specific
property which, in turn, guarantees linear output feedback nullification.
Proposition 3.1. Suppose that the system (1.1) has the property that for every x0 ∈
R^n the equality

(A + kj0 bc) · · · (A + k1 bc)(A + k0 bc)x0 = 0    (3.1)
holds for an appropriate choice (which may depend on x0 ) of numbers k0 , . . . , kj0 .
Then the system is linear output feedback nullifiable.
Proof. We choose n linearly independent vectors v1, . . . , vn. For v1 we utilize (3.1)
with x0 = v1 to determine a matrix, say M1, of the form appearing in (3.1) such that
M1 v1 = 0. Inductively, for vj+1 we use (3.1) with x0 = Mj · · · M1 vj+1 to determine a
matrix, say Mj+1, of the form appearing in (3.1) such that Mj+1 Mj · · · M1 vj+1 = 0.
Clearly, Mn · · · M1 x0 = 0 for every x0 and Mn · · · M1 has the desired form of a
feedback from the output. This completes the proof.
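The composition argument can be sketched in code (our own illustration, numpy assumed, on a hypothetical 2 × 2 canonical-form system). The single-vector routine below always realizes the new bottom coordinate as 0; this greedy choice is not the paper's generic realization and can fail in general, but it succeeds for this example:

```python
import numpy as np

# Illustrative canonical form (ours): alpha = (1, 0), gamma = (1, 0).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
a = A[-1:, :]                            # bottom row of A

def nullify_one(x, max_steps=10):
    """Return a product of factors (A + k_j b c) driving x to 0.
    Whenever c x != 0, k_j is chosen so the new bottom coordinate is 0
    (a greedy realization); otherwise k_j = 0 and the step is just A."""
    M = np.eye(len(x))
    for _ in range(max_steps):
        if np.allclose(M @ x, 0):
            return M
        y = float(c @ M @ x)
        k = -float(a @ M @ x) / y if abs(y) > 1e-12 else 0.0
        M = (A + k * (b @ c)) @ M
    raise RuntimeError("greedy realization failed")

# Proposition 3.1: nullify e_1, then the image of e_2, and compose.
M1 = nullify_one(np.array([[1.0], [0.0]]))
M2 = nullify_one(M1 @ np.array([[0.0], [1.0]]))
assert np.allclose(M2 @ M1, 0.0)         # nullifies every initial state
```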
With the preceding result the proof of the sufficiency would be complete if we
prove the following result.
Proposition 3.2. Suppose that the system (1.1) is controllable and observable and
that cAdj(A)b ≠ 0. Then for every fixed x0 ∈ R^n the equality

(A + kj0 bc) · · · (A + k1 bc)(A + k0 bc)x0 = 0    (3.2)
holds for a certain sequence (which may depend on x0 ) of numbers k0 , . . . , kj0 .
The rest of the section is devoted to the proof of the preceding proposition. The
proof utilizes a method of introducing free variables whose possible realizations
play a major role. In spite of the apparently indirect method, an outcome of the derivation is a simple algorithm whose characteristics and an estimate of its conclusion
time are displayed in the next section.
We find it very convenient to prove the result for a system in the controller canonical form. As was pointed out after the statement of Theorem D it is enough to verify
the result for a system in this form (see Remark 4.2 on the general case). We start
with an algorithm and draw some consequences on its outcome with no reference to
the conditions of the proposition. Then we derive some properties of the outcome
under the observability assumption. Only then do we utilize the condition γ1 ≠ 0
to verify the nullification.
Let x = (ξ1 , . . . , ξn )T be in R n (the superscript T signifies transposition). A
useful observation is that when A is applied to x the result is the vector Ax =
(ξ2 , . . . , ξn , ax)T where a = (α1 , . . . , αn ) is the bottom row of A; namely, the first
n − 1 coordinates of Ax are formed by a shift of the last n − 1 coordinates of x and
the last coordinate of Ax is equal to ax.
Construction 3.3. Starting with x0 = (ξ0,1 , . . . , ξ0,n )T we generate the sequence
x1 , x2 , . . . , as follows.
(i) If cx0 = 0 we define x1 = Ax0 .
(ii) If cx0 ≠ 0 we define x1 = (ξ0,2 , . . . , ξ0,n , δ1)^T where δ1 is a variable whose
value will be determined later; namely, x1 is formed by shifting the last n − 1
coordinates of x0 and adding the free variable δ1 as the n-th coordinate.
Inductively, suppose that xj = (ξj,1 , . . . , ξj,n )T has been constructed.
(i) If cxj = 0 for any choice of numerical values of the free variables δi for i ≤ j ,
we define xj +1 = Axj . We denote then the last coordinate of xj +1 by σj +1 ,
namely σj +1 = axj . (The coordinates of xj +1 , including σj +1 , may still be
functions of the free variables δi which have been introduced earlier.)
(ii) If cxj ≠ 0 for some numerical realization of δi for i ≤ j we introduce a new
free variable δj+1 and define xj+1 = (ξj,2 , . . . , ξj,n , δj+1)^T; namely, xj+1 is
formed by shifting the last n − 1 coordinates of xj and adding the variable
δj +1 as the n-th coordinate.
The free variables will be used in the conclusion of the proof to determine (via
the equations δi+1 = (a + ki c)xi ) the desired feedback ki . Meanwhile, the previous
construction determines a sequence x0 , x1 , . . . , such that the coordinates of xj are
either free variables δi or terms σi which are functions of the free variables with
index less than j and of the coordinates of x0 . Notice that the index i of δi or of σi
indicates the first time at which the term appears in the construction. In particular,
if a free variable δi or a term σi occupies the l-th coordinate of xj then i = j − n + l.
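The bookkeeping of Construction 3.3 can be sketched in pure Python (our own illustration on a hypothetical 2 × 2 canonical-form system): each coordinate of xj is stored as an affine form in the free variables δi, and case (i) or (ii) is selected according to whether cxj is identically zero as an affine form:

```python
# Illustrative system (ours): n = 2 with bottom row a = (alpha_1, alpha_2)
# = (1, 0) and c = (gamma_1, gamma_2) = (1, 0).  Each coordinate of x_j is
# an affine form in the free variables delta_i, stored as a dict
# {0: constant term, i: coefficient of delta_i}.
alpha = [1.0, 0.0]
gamma = [1.0, 0.0]

def combine(coeffs, forms):
    """Linear combination of affine forms, dropping zero coefficients."""
    out = {}
    for w, f in zip(coeffs, forms):
        for var, coef in f.items():
            out[var] = out.get(var, 0.0) + w * coef
    return {v: c for v, c in out.items() if abs(c) > 1e-12}

x = [{0: 1.0}, {0: 2.0}]       # x_0 = (1, 2): constant affine forms
next_delta = 1
history = []
for j in range(4):
    cx = combine(gamma, x)     # c x_j as an affine form in the deltas
    if cx:                     # not identically zero: case (ii)
        x = x[1:] + [{next_delta: 1.0}]   # shift and append delta_{j+1}
        history.append('free')
        next_delta += 1
    else:                      # identically zero: case (i), x_{j+1} = A x_j
        x = x[1:] + [combine(alpha, x)]   # shift and append a . x_j
        history.append('shift')

assert history == ['free'] * 4   # case (ii) applies at every step here
```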
Observation 3.4. A coordinate σj is an affine (namely, linear plus a constant shift)
function of the free variables δi for i < j .
Proof. Follows easily from the construction.
Next recall that a free variable appears as the n-th coordinate of xj +1 if for at
least one realization of the free variables δi for i ≤ j the expression cxj is not zero.
Let f (j ) be the number of the free variables δi appearing in the process for i ≤ j
(here f stands for free). We consider the possible realizations of the sequence δi for
i ≤ j to be elements in R f (j ) , the f (j )-dimensional linear space.
Claim 3.5. If for one realization of the free variables δi, i ≤ j, the inequality cxj ≠ 0
holds, then it holds for an open dense set in R^{f(j)}.
Proof. The claim is valid since in view of Observation 3.4 the constraint cxj = 0
is an affine constraint on R^{f(j)}; hence either cxj = 0 for every vector in R^{f(j)}, or
cxj = 0 determines an (f(j) − 1)-dimensional linear manifold within R^{f(j)}, whose
complement is open and dense.
Claim 3.6. For every j there is an open dense set of realizations in R^{f(j)} of the free
variables for which cxi ≠ 0 for all the indices i ≤ j such that cxi ≠ 0 for some
realization of the free variables (namely, for all i ≤ j such that the last coordinate of
xi+1 is δi+1).
Proof. A finite intersection of open dense sets is open and dense; in fact, the open
dense set is the complement of a finite union of (f(j) − 1)-dimensional linear manifolds within R^{f(j)}.
Notation 3.7. We check how many free variables δi with indices i arising in the n
steps before j, namely, satisfying j − n + 1 ≤ i ≤ j, appear as coordinates of the
vector xj. We denote this number by d(j) (here d stands for dimension; indeed, d(j)
will be the dimension of an appropriate linear space). We also check how many of
these free variables appear in the last n − l0 + 1 coordinates of xj where l0 is the last
coordinate of c for which γl0 ≠ 0. We denote the number of such free variables by
r(j) and denote r∗ = n − l0 + 1. In particular, r∗ is an upper bound for r(j).
Observation 3.8. d(j + n) is equal to the number of indices i ∈ {j, j + 1, . . . , j + n −
1} such that item (ii) in the construction applies, namely, cxi ≠ 0 for some realization
of the free variables with index less than or equal to i.
Proof. The claim follows from the shift structure of consecutive vectors. Specifically, the l-th coordinate of xj+n is a free variable if and only if cxj+l−1 ≠ 0 for some realization.
Claim 3.9. r(j) is a nondecreasing sequence.
Proof. The shift operation implies that r(j) decreases only if a free variable δi
moves from position l0 to position l0 − 1 and the new entry at position n is σj+1.
But when a free variable δi occupies position l0 it appears in the expression cxj
with a nonzero coefficient, namely γl0. Hence a realization of the free variables
exists such that cxj ≠ 0 and then the n-th coordinate of xj+1 is δj+1. In particular,
r(j + 1) ≥ r(j).
Claim 3.10. Suppose that r(j ) reaches its maximum value at j = m∗ . For j ≥ m∗
the sequence indicating whether the last coordinate of xj is a free variable is periodic
with period r ∗ . For j ≥ m∗ + n − r ∗ the sequence d(j ) is also periodic with period r ∗ .
(In both cases r ∗ may not be the minimal period.)
Proof. The first assertion follows from the argument utilized in Claim 3.9, namely,
for j > m∗ a new free variable appears exactly after a free variable moves from the
coordinate l0 to the coordinate l0 − 1. The second claim follows since the coordinate
n − l of xj is identical to the n-th coordinate of xj −l and therefore after n − r ∗
more steps d(j ) is determined by the distribution of free variables in the last r ∗
coordinates of xj .
The following auxiliary construction is needed in order to establish more properties of the preceding algorithm. Once these properties are derived (see Summary
3.18) the auxiliary construction is not used any more.
Auxiliary Construction 3.11. For a fixed index j0 consider the sequence of n + 1
vectors xj0, . . . , xj0+n. The coordinate l of, say, xj is either the free variable δi with
i = j − n + l or the term σi which is an affine function of the free variables with index
smaller than i. Now we adjust the sequence of vectors xj0, . . . , xj0+n as follows.
When a new free variable δj+1 appears after the j0 step, namely, it appears as the
last coordinate of xj+1 for j = j0, . . . , j0 + n − 1, we replace it by axj. With this
agreement all the coordinates of xj for j = j0 , . . . , j0 + n become affine functions
of the free variables appearing as coordinates in xj0 . Another very useful feature
of the adjustment is that for any realization of the free variables δi for 1 ≤ i ≤ j0
we have xj +1 = Axj for j = j0 , . . . , j0 + n − 1. Now we make another ad hoc
adjustment and fix a realization of all the free variables δi with index i between 1
and j0 − n (namely, those which appear in the coordinates of xj0 only through an
affine expression determined earlier). With such a choice we are left with only d(j0 )
free variables appearing in xj0 . Hence, only these free variables may now appear in
xj for j = j0 , . . . , j0 + n. (The two adjustments, including the realization just chosen, are done in order to verify several properties of Construction 3.3; in particular,
they will not play a role in the final steps of the proof.) Denote the linear space of
the realizations of the remaining free variables by R d(j0 ) . Consider the mapping, say
L, which assigns to an element in R d(j0 ) the string (cxj0 , . . . , cxj0 +n−1 ) of observations. Notice, however, that by Construction 3.3 cxj = 0 unless originally xj +1 had
δj +1 as its n-th coordinate. By Observation 3.8 the number of such coordinates is
d(j0 + n). Ignoring the coordinates in the range of L, which are guaranteed to be 0,
we consider L as a mapping from R d(j0 ) to the d(j0 + n)-dimensional space of those
cxi where cxi is not guaranteed to vanish. Notice that the mapping L is affine.
At this point, based on observability only, we eliminate some possibilities concerning the outcome of Construction 3.3. The condition γ1 ≠ 0 is still not being
used.
Lemma 3.12. When (1.3) is observable the mapping L defined in Auxiliary Construction 3.11 is one to one. If d(j0 ) = d(j0 + n) then the mapping L is one to one and
onto.
Proof. If the mapping is not one to one, there are two realizations of the free
variables δi in R d(j0 ) which give rise to two distinct dynamics of length n + 1 of
the homogeneous system (i.e. xj +1 = Axj ) with the same observations cxi . This
contradicts observability. A one-to-one affine map is onto when the dimensionality
of the range and the domain coincide.
Lemma 3.13. When (1.3) is observable then d(j + n) ≥ d(j ). Furthermore, if d(j +
n) = d(j ) then d(j + n + 1) ≥ d(j + n).
Proof. Since the mapping L defined in Construction 3.11 is an affine map from
R d(j ) to R d(j +n) it cannot be one to one if d(j + n) < d(j ). This verifies the first
claim. If the second claim is false then d(j + n + 1) < d(j + n). This implies that the
last coordinate of xj +n+1 is not a free variable. By the first part d(j +n+1) ≥ d(j +1)
hence the condition d(j + n) = d(j ) implies that the last coordinate of xj +1 is not
a free variable as well. If the last coordinates of both xj +1 and xj +n+1 are not free
variables, it follows that the range of the mapping L defined in Auxiliary Construction 3.11 (whose dimension counts the number of indices for which cxj ≠ 0 for
some realization) has the same dimension as the range of the mapping L when the
construction is applied to j + 1. Since the latter range is less than d(j)-dimensional
we obtain a contradiction to the one-to-one property of L established in Lemma 3.12.
Lemma 3.14. When (1.3) is observable and d(j0 ) = d(j0 +n) then in Auxiliary Construction 3.11 the coordinates of the vectors xj0 , . . . , xj0 +n are in fact linear (rather
than affine) functions of the free variables present as coordinates of xj0 .
Proof. By Lemma 3.12 the mapping L is onto. Hence the zero vector is in its range.
This means that for a certain choice of the free variables δi present as coordinates
in xj0 , the vector of observations cxj , j = j0 , . . . , j0 + n − 1 is the zero vector
(those coordinates in the range of L are zero due to the choice of δi and the rest
are guaranteed to be equal to zero). Since the construction generates a dynamics of
the homogeneous system xj +1 = Axj it follows from observability that the given
choice of δi yields xj0 = 0 and hence xj = 0 for j = j0 , . . . , j0 + n. In particular, all
the chosen free variables δi are equal to zero and likewise the values of the coordinates. The latter are affine functions of the free variables. Hence they must be linear
functions as claimed.
Lemma 3.15. Suppose that (1.3) is observable and suppose that for n consecutive
times j0 , . . . , j0 + n − 1 the equality d(j ) = d(j + n) holds. Then in Auxiliary Construction 3.11 the coordinates of xj0 , . . . , xj0 +n which are not free variables themselves
are actually equal to 0. If in addition d(j) is constant for j = j0, . . . , j0 + n − 1 then,
whenever the last coordinate of xj+1 is a free variable, the first coordinate of xj is also a
free variable.
Proof. Consider a coordinate of xj0 which is not a free variable itself. Say it is σi
and it occupies the coordinate l. Then σi occupies the first coordinate in xj0 +l−1 .
Applying Lemma 3.14 with a new starting point, namely j0 + l − 1, implies that σi
is a linear function of the free variables appearing as coordinates of xj0 +l−1 . The
definition of σi, however, as given in item (i) of Construction 3.3 implies (see
Observation 3.4) that σi is an affine function of free variables with index less than i.
Being, at the same time, a linear function of a set of variables and an affine function
of another set of variables where the intersection of the two sets is empty implies
that σi = 0. The last claim follows since the only way a free variable exits the array is
when it occupies the first coordinate, combined with the fact that d(j) is constant.
The following derivations refer to the index m∗ which was introduced in Claim
3.10, namely, the index in which the number of free variables between the coordinate
l0 and the coordinate n reaches maximum. Recall that r ∗ = n − l0 + 1 where l0 is
the last coordinate of c which is not equal to zero.
Lemma 3.16. When (1.3) is observable the equality d(j + n) = d(j ) holds for all
j ≥ m∗ + n − r ∗ .
Proof. By Claim 3.10 the sequence d(j) is periodic with period r∗. Hence d(j +
nr∗) = d(j). By Lemma 3.13, d(j) ≤ d(j + n) ≤ d(j + nr∗). Therefore d(j) =
d(j + n).
In the preceding argument we could replace nr ∗ by LCM(n, r ∗ ), the least common
multiple of n and r ∗ . This will become handy in the estimates displayed in Sect. 4.
Lemma 3.17. When (1.3) is observable then d(j) for j ≥ m∗ + n − r∗ is a constant
sequence.
Proof. Follows from the periodicity established in Lemma 3.16 and from Lemma
3.13.
At this point we wish to summarize the conclusions obtained by the preceding
arguments. The realization of the free variables employed in Auxiliary Construction
3.11 and in the proofs of the previous results do not play a role any more. (Also, the
condition γ1 ≠ 0 has not been used yet. It will be used in the next lemma.)
Summary 3.18. When Construction 3.3 is applied to an observable system, then
for j ≥ m∗ + n − r ∗ the coordinates of the vector xj are either free variables with
index i between j − n + 1 and j , or they are equal to 0. Furthermore, if the last
coordinate of xj +1 is a free variable, then the first coordinate of xj is also a free
variable.
The arguments leading to Summary 3.18 are that Lemmas 3.16 and 3.17 establish
that the conditions assumed in Lemma 3.15 actually hold when j0 ≥ m∗ + n − r ∗ .
Hence the conclusions of Lemma 3.15 hold for j0 ≥ m∗ + n − r ∗ . In what follows
we use the condition γ1 ≠ 0.
Lemma 3.19. Suppose that (1.3) is observable and that γ1 ≠ 0. Fix j0 ≥ m∗ + n − r∗.
For any j, between j0 and j0 + n − 1, such that xj has a free variable as its first coordinate, consider the constraint cxj ≠ 0 defined, however, on the realizations of the vector
xj where all the coordinates are zero except those containing free variables with index
less than or equal to j0. Such a constraint defines an open dense set in the space R^{f(j0)},
namely, the space of the realizations of the free variables δi with index less than or
equal to j0.
Proof. Follows from the condition γ1 ≠ 0. Indeed, when for j0 ≤ j ≤ j0 + n − 1 the
first coordinate of xj is a free variable, then this free variable is a coordinate of xj0.
Thus, it can be assigned any value in the chosen realization. Since in the expression
cxj this free variable appears only as a multiple of the nonzero term γ1, there is a
choice of the free variables for which the inequality cxj ≠ 0 holds and therefore it
holds for an open dense set of realizations.
Conclusion of the proof of Proposition 3.2. For j0 = m∗ + n − r∗ we choose a realization of the free variables δi with index less than j0 + n which meets all the constraints cxj ≠ 0 as identified in Claim 3.6 and the d(j0) constraints defined in
Lemma 3.19. The latter lemma guarantees the existence of an open dense set of
such realizations. Such a realization of the free variables induces a realization of
all the vectors xj for j = 0, . . . , j0 . For the remaining vectors, namely for xj with
j = j0 + 1, . . . , j0 + n, we define xj by a shift of the coordinates of xj −1 with the
insertion of a zero as the n-th coordinate. It is clear then that xj0 +n is the zero vector.
The proof will be complete if we show that for each j ≥ 0 the vector xj +1 can be
written as xj +1 = (A + kj bc)xj for an appropriate choice of kj . Construction 3.3
implies that if prior to the realization the vector xj did not contain a free variable
as its n-th coordinate, the representation holds with kj = 0. This is true also for the
last n vectors in the sequence since for these vectors if prior to the realization the last
coordinate was not a free variable it is bound to be 0 (see Lemma 3.15). When the
last coordinate of xj +1 was a free variable prior to the realization, the choice of the
realization guarantees that cxj ≠ 0. Then with appropriate choices of kj any value
can be obtained as the last coordinate of the vector xj+1 = (A + kj bc)xj. Indeed, the
last coordinate is given by the expression (a + kj c)xj which can be made arbitrary
when cxj ≠ 0. Now we choose kj such that the last coordinate is consistent with the
chosen realization for j ≤ j0 and is equal to 0 when j0 < j ≤ j0 + n. This completes
the proof of Proposition 3.2 (and thus the proof of the sufficiency in Theorem D is
complete).
4. The Algorithm and an Estimate
In this section we extract from the proof of the sufficiency part of Theorem D a
rather simple algorithm which results in the determination of the nullifying feedback parameters. Following the statement of the algorithm we provide an estimate
for the time within which the algorithm is concluded. We also display some comments and illustrative examples.
Algorithm 4.1. For simplicity we outline the algorithm which corresponds to Proposition 3.2, namely, a vector x0 is given and a linear output feedback which nullifies
x0 is sought. The general nullifying feedback is then a composition of n such nullifying results as described in Proposition 3.1.
Step 1. For the prescribed x0 follow Construction 3.3 until the coordinates of n+1
consecutive vectors (say, the vectors xj0 , . . . , xj0 +n ) are either free variables δi for
i bigger than j0 − n + 1 or are equal to 0 and furthermore, if the last coordinate
of xj +1 among the n + 1 vectors is a free variable then the first coordinate of xj is
also a free variable. (We later give a bound on the time at which the conditions are
fulfilled.)
Step 2. Set the free variables δi for i > j0 to be equal to 0.
Step 3. Determine a realization of the free variables δi which arise throughout the
process for i ≤ j0 such that when before the realization the last coordinate of a vector xj (for any j ≤ j0 + n) has been a free variable the inequality cxj−1 ≠ 0 holds.
(The determination of the realization can be carried out successively, identifying in
each step j a cube of feasible realizations for the free variables arising before the
j -th step.)
Step 4. For any vector xj such that in step 1 the last coordinate of xj was δj , choose
kj such that (a + kj c)xj −1 equals the chosen value of δj in the realization in steps 2
and 3. When the last coordinate of xj has not been a free variable δj choose kj = 0.
As is clear from the proof of Proposition 3.2, the sequence kj identified in step 4
will nullify x0 utilizing a linear output feedback.
Remark 4.2. The preceding algorithm exploits the structure of the canonical form.
It is possible to rephrase it in terms of the original form (1.1). Indeed, notice that
in Construction 3.3 when cxj = 0 the next vector is defined by xj+1 = Axj and
when cxj ≠ 0 the next vector could be given by xj+1 = (A + δj+1 bc)xj (we use here
indexing in line with the proof of the main result). This change would replace the
feedback gain kj by a free variable. In the form just mentioned the two definitions
are coordinate free and could be carried out in the original coordinate form. When
step j0 of Algorithm 4.1 is reached the resulting array would be (after the change
back of the variables) the one obtained with the canonical form if rather than the
free variable δi we would use the equivalent expression (a + δj +1 c)xj . Thus, steps
2–4 of Algorithm 4.1 could be carried out with the apparent modifications due to
the form of the new free variables. In particular, rather than employing the fact that
δi appears as the first coordinate of xj in order to determine δj+1 we should use the
free variable kj when cxj ≠ 0 to determine the next vector. The resulting modification of the algorithm is straightforward. The computation of the desired feedback
control may then be more cumbersome than in the canonical form but it would save
the need to compute the similarity transformation leading to the canonical form. In
general, when a system of the form (1.1) is encountered it may be better to compute
the term cAdj(A)b than to work out the appropriate change of variables and check
the coordinate γ1 . A case for a stable computation of the adjoint is made in Stewart
[12].
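As a quick numerical illustration of the quantity cAdj(A)b discussed in Remark 4.2 (a Python sketch outside the paper; the naive formula below is not the stable computation of Stewart [12]), recall that the adjugate of a nonsingular matrix equals its determinant times its inverse:

```python
import numpy as np

def adjugate(M):
    # Naive formula for a nonsingular matrix: Adj(M) = det(M) * inv(M).
    # (For a numerically stable computation of the adjugate see Stewart [12].)
    return np.linalg.det(M) * np.linalg.inv(M)

# The system used in Example 4.5 below: A = [[0,1],[1,4]], b = (0,1)^T, c = (1,0).
A = np.array([[0.0, 1.0], [1.0, 4.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
val = (c @ adjugate(A) @ b).item()
print(val)  # close to -1.0; nonzero, hence the system is nullifiable
```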
Estimate 4.3. We provide an upper bound on the number of steps within which
the time index j0 in step 1 of the algorithm is reached. Recall that r∗ = n − l0 + 1,
where l0 is the index of the last nonzero coordinate of c. Recall that LCM(n, r∗)
stands for the least common multiple of the integers n and r∗. We claim that

j0 ≤ 2n − r∗ + (r∗ − 1) max(3n, LCM(n, r∗)).     (4.1)
In order to verify the claim notice that what is needed in order to deduce Summary 3.18, Lemma 3.19 and the conclusion of the proof, is that d(j) = d(j + 1)
for 2n consecutive indices j. See Lemma 3.15. It follows from Lemma 3.13 that
d(j) = d(j + 1) for 2n consecutive indices if d(j) = d(j + n) for n + 1 consecutive
indices. In order that the equality d(j) = d(j + n) holds for n + 1 consecutive indices
j it is enough that r(j) be constant over a period of length max(3n, LCM(n, r∗)).
This follows from the argument in the proof of Lemma 3.16. The count r(j) does
not decrease (Claim 3.9), thus the process would end unless there is an increase
within max(3n, LCM(n, r∗)). Observability implies that the first increase from 0 to
1 occurs within n steps. Thus an estimate for the occurrence of the maximal number
of increases in the count r(j) is the right-hand side of (4.1) with n replacing 2n. The
extra n − r∗ is needed since d(j) may become constant only n − r∗ steps after r(j)
becomes constant (Observation 3.8).
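The right-hand side of (4.1) is elementary to evaluate; the following small Python helper (a sketch outside the paper, with the convention r∗ = n − l0 + 1 of Estimate 4.3) computes it:

```python
from math import lcm  # available since Python 3.9

def j0_bound(n, r_star):
    # Right-hand side of (4.1): an upper bound on the index j0 of step 1.
    return 2 * n - r_star + (r_star - 1) * max(3 * n, lcm(n, r_star))

print(j0_bound(2, 1))  # 3, the two-dimensional case with r* = 1
print(j0_bound(3, 3))  # 21
```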
Remark 4.4. Step 1 in Algorithm 4.1 ends when an array of vectors with coordinates being either free variables or zeros is obtained. In many cases the array
contains only free variables (then the algorithm is concluded by setting the free
variables which follow as zeros). A prime example for this phenomenon is the case
where γn ≠ 0, where γn is the last coordinate of c, namely, r∗ = 1. Then, once a free
variable δj is introduced (and observability implies that this happens for j ≤ n),
the condition γn ≠ 0 implies that free variables are introduced in all the vectors
that follow. The index j0 where step 1 in Algorithm 4.1 is concluded will then occur
within 2n steps as provided by (4.1) and it is easy to come up with an example where
the 2n steps are needed.
There are cases, however, where entries equal to 0 cannot be avoided. Such a case
is where c = (1, 0, . . . , 0) and a = (1, 0, . . . , 0). For x0 = (1, 0, . . . , 0)T carrying
out step 1 in the algorithm results in an array of vectors each having a free variable
in one of its coordinates and zeros elsewhere. For a general initial vector x0 any zero
coordinate will be maintained throughout the process.
Example 4.5. We demonstrate the nullification algorithm on a two-dimensional
example. Consider the system

A = [[0, 1], [1, 4]],   b = (0, 1)^T,   c = (1, 0).     (4.2)
According to our main result it can be nullified utilizing linear output feedback.
Following the suggested algorithm we start with, say, x0 = (1, 0)^T. Since cx0 ≠ 0
we write x1 = (0, δ1)^T. The algorithm suggests continuing until we get the array
described in step 1 of Algorithm 4.1, but a direct inspection reveals that setting
δ1 = 0 will nullify x0 . The value δ1 = 0 can be generated by the solution of the
equation (a + kc)x0 = 0. The solution is clearly k = −1. Hence the output feedback
k0 = −1, namely the matrix (A − bc), nullifies x0 .
The next move is to check what happens to, say, (0, 1)T under the just-identified feedback sequence. A direct inspection reveals that (A − bc)(0, 1)T = (1, 4)T .
We denote the latter vector again by x0 and apply the algorithm. Now cx0 ≠ 0,
hence x1 = (4, δ1)^T. Likewise, cx1 ≠ 0 for some (in fact, any) choice of δ1, hence
x2 = (δ1, δ2)^T. A direct inspection reveals that the realization δ1 = δ2 = 0 is within
the identified open dense set. Thus, the solution to (a + kc)x0 = 0, which is k = −17,
results in δ1 = 0, and the solution of (a + kc)(4, 0)^T = 0, which is k = −1, results now
in δ2 = 0; therefore the sequence which nullifies (1, 4)^T is k0 = −17, k1 = −1.
All in all, a sequence of linear output feedback which nullifies the system (4.2) is
determined by the control parameters (k0 , k1 , k2 ) = (−1, −17, −1), resulting in the
matrix (A − bc)(A − 17bc)(A − bc).
Notice that our algorithm has produced a nullifying feedback of three steps,
which is less than what is guaranteed in (4.1) and equals what is guaranteed in
Aeyels and Willems [2] for the nonzero pole placement for a generic set of systems, see
Remark 5.1.
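The computation of Example 4.5 can be checked numerically; a Python sketch (outside the paper; note that the gains are applied in the order k0, k1, k2, so the last gain multiplies on the left):

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 4.0]])
bc = np.array([[0.0], [1.0]]) @ np.array([[1.0, 0.0]])

# Closed-loop factors for the gains (k0, k1, k2) = (-1, -17, -1).
M0, M1, M2 = (A + k * bc for k in (-1.0, -17.0, -1.0))

# x3 = M2 M1 M0 x0 for every x0, so the composite matrix must vanish.
P = M2 @ M1 @ M0
print(np.allclose(P, np.zeros((2, 2))))  # True: every initial state is nullified
```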
It is easy to see that (4.2) can neither be nullified nor stabilized with a time-invariant output feedback. Indeed, for any control feedback k the resulting matrix A+kbc
will have an eigenvalue with absolute value greater than one.
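The failure of every constant gain can also be seen numerically; a brief Python scan (outside the paper; for k ≥ −5 the eigenvalues of A + kbc are 2 ± sqrt(5 + k), and for k < −5 they are complex of modulus sqrt(−1 − k) > 2):

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 4.0]])
bc = np.array([[0.0], [1.0]]) @ np.array([[1.0, 0.0]])

# Spectral radius of A + k*bc over a grid of constant gains:
# it never drops below 2, so no constant gain is stabilizing.
radii = [max(abs(np.linalg.eigvals(A + k * bc))) for k in np.linspace(-50, 50, 201)]
print(min(radii) >= 2 - 1e-9)  # True
```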
5. Comments, Consequences
We collect in this section some comments, examples and counterexamples related
to Theorem D and its proof, including some consequences concerning systems for
which the condition γ1 = 0 does not hold and comparisons with the literature.
Remark 5.1. Aeyels and Willems examined in [2] the pole assignment by output
feedback for the discrete-time system (1.1); see also Aeyels and Willems [1]. They
established the remarkable result that generically with a proper choice of feedback
gains k0 , k1 , . . . , kn the poles of the transfer function of the matrix (A+kn bc) · · · (A+
k1 bc)(A + k0 bc) can be placed arbitrarily provided that no desired pole is at zero.
In particular, the eigenvalues of the composite matrix can be determined provided
that no zero eigenvalue is sought. In this respect our result is complementary to
[2]. The specific result in [2] phrased in the transfer function form (with, however,
the notations of the present paper)
c(Iz − A)^{−1} b = (γ1 + γ2 z + · · · + γn z^{n−1}) / (α1 + α2 z + · · · + αn z^{n−1} + z^n)     (5.1)
is, roughly, as follows. When A is (without loss of generality) nonsingular, the ratios
αi /γi , i = 1, . . . , n are distinct and with an additional controllability condition on
the closed loop, the poles of the system can be assigned arbitrarily as mentioned,
provided that no desired pole of the closed loop is at the origin. The verification
of the result is algorithmic. Aeyels and Willems [2] note that the conditions are
not necessary conditions but that γ1 ≠ 0 (which is our characterization for nullifiability in general) is necessary. Notice that the sufficient condition suffices for pole
assignment (except at zero) in n + 1 steps.
The sufficient condition in Theorem D uses the inequality γ1 ≠ 0. The method of
proof, however, has some implications concerning the case γ1 = 0, as follows.
Proposition 5.2. Suppose that the system (1.1) is controllable and observable. Then
any initial condition x0 can be driven with linear output feedback arbitrarily close to
the origin. This can be achieved within a time period bounded by the right-hand side
of (4.1).
Proof. As follows from Theorem D, when γ1 ≠ 0 the initial condition can even be
nullified. Notice, however, that the property γ1 ≠ 0 was used in the proof of Proposition 3.2 only after Summary 3.18. In particular, regardless of the value of γ1,
Algorithm 4.1 will result within the time estimate (4.1) in an array containing only
free variables and zeros. Also, a realization of the free variables in an open dense set
can be made an outcome of appropriate linear output feedback as explained in the
conclusion of the proof of Proposition 3.2. The open dense set of such free variables
contains elements arbitrarily close to the origin. This completes the proof.
The advantage of driving a state close to the origin while having the origin fixed,
as guaranteed by the previous result, is apparent.
Example 5.3. We illustrate some of the preceding arguments with two-dimensional
examples (the case γ1 ≠ 0 was illustrated in Example 4.5). Consider the system

A = [[0, 1], [α1, α2]],   b = (0, 1)^T,   c = (0, γ2)     (5.2)

where γ2 ≠ 0. The form (5.2) is the general form of a controllable and observable
two-dimensional system where γ1 = 0.
(i) When |α1| ≥ 1 the system cannot be stabilized with a linear output feedback. However, as an example consider (α1, α2) = (4, 0) with γ2 = 1, and the initial condition
x0 = (1, 2)^T which is an eigenvector of the eigenvalue 2 of the uncontrolled dynamics. We apply now step 1 in Algorithm 4.1. Since cx0 ≠ 0 we define x1 = (2, δ1)^T.
Since cx1 ≠ 0 for some choice of δ1 we define x2 = (δ1, δ2)^T. At this point, if
we could set both δ1 = 0 and δ2 = 0, we could nullify the vector. But the choice
δ1 = 0 is not within the dense open set identified in the proof of the main result.
We can, however, determine realizations of the free variable arbitrarily close to the
origin, say, δ1 = ε and δ2 = 0. Solving (a + k0 c)x0 = ε results in k0 = ε/2 − 2 and
employing the latter control results in x1 = (2, ε)^T. Solving (a + k1 c)x1 = 0 results
in k1 = −8ε^{−1}. Hence applying the control sequence (k0, k1) = (ε/2 − 2, −8ε^{−1})
would shift x0 = (1, 2)^T to the vector (ε, 0)^T that is arbitrarily close to zero.
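The two-step computation of item (i) can be verified directly; a Python sketch (outside the paper; the values γ2 = 1 and ε = 0.01 are illustrative assumptions for the check):

```python
import numpy as np

eps = 0.01
A = np.array([[0.0, 1.0], [4.0, 0.0]])                  # (alpha1, alpha2) = (4, 0)
bc = np.array([[0.0], [1.0]]) @ np.array([[0.0, 1.0]])  # b = (0,1)^T, c = (0, 1)

x = np.array([1.0, 2.0])
for k in (eps / 2 - 2, -8 / eps):                       # the gains (k0, k1)
    x = (A + k * bc) @ x
print(np.allclose(x, [eps, 0.0]))  # True: the state is driven eps-close to the origin
```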
(ii) When |α1| < 1 the linear output feedback with k = −α2 γ2^{−1} transfers the
system to the form (5.2) with α2 = 0. Then both eigenvalues are within the unit disc
which means that the system is now stable.
Our findings concerning two-dimensional systems provide characterizations of
both output feedback stabilizations and output feedback nullification. They are
summarized in the following proposition. Results for two-dimensional systems have
been obtained by Leonov [7]. See also Aeyels and Willems [1] for the possibility of
pole assignment for two-dimensional systems.
Consider the general two-dimensional system

A = [[α1,1, α1,2], [α2,1, α2,2]],   b = (β1, β2)^T,   c = (γ1, γ2).     (5.3)

The adjoint of A is then

Adj(A) = [[α2,2, −α1,2], [−α2,1, α1,1]]     (5.4)

and the term cAdj(A)b can easily be computed.
Proposition 5.4. Suppose that (5.3) is controllable and observable. The system is linear output feedback nullifiable if and only if cAdj(A)b ≠ 0. In particular the condition
is sufficient for linear output feedback stabilization. When cAdj(A)b = 0 the system
is linear output feedback stabilizable if and only if |Det(A)| < 1, in which case it is stabilizable with a time-independent linear output feedback. The condition Det(A) < 1 is
not sufficient for stabilization with a time-independent linear output feedback in case
cAdj(A)b ≠ 0.
Proof. The first claim and the necessity part of the second claim form a restatement
of Theorem D in the two-dimensional case. The sufficiency part of the second claim
follows from item (ii) of Example 5.3. A counterexample verifying the last claim
is Example 4.5 (or rather, the equivalent variation obtained by setting α1,1 = 0 in
(4.2)).
Remark 5.5. The conclusion of the preceding result concerning time-independent
stabilization in case Det(A) < 1 and cAdj(A)b = 0 does not hold in more than
two dimensions. For example, the system determined by a = (−1/2, −5, 0) and c =
(0, 2, 1) is controllable and observable and the relevant determinant is equal to −1/2.
Yet the system cannot be stabilized with a time-independent output feedback. To see
that notice that when the control is determined by u(cx) = kcx the resulting dynamics is driven by the matrix in the canonical form determined by a = (−1/2, −5 + 2k, k).
Its characteristic polynomial is

P(λ) = −λ^3 + kλ^2 + λ(−5 + 2k) − 1/2.     (5.5)

We claim that for any choice of k there is a root of the polynomial (5.5) in the open
interval (−1/2, 1/2). We assume the contrary. Then, since P(0) < 0 it follows that
P(−1/2) < 0 and P(1/2) < 0. The latter two inequalities amount to (5/8)(2k − 5) < 0 and
(1/8)(17 − 6k) < 0. Namely, assuming no root has absolute value less than 1/2 implies
k < 5/2 and k > 17/6, which is a contradiction. If there exists an eigenvalue with absolute value less than 1/2 then, since the determinant is equal to −1/2, it follows that an
eigenvalue with absolute value greater than 1 should exist as well. In particular, for
any choice of k as a time-independent output feedback the system is not stable.
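The three-dimensional claim of Remark 5.5 is easy to probe numerically; a Python sketch (outside the paper; the gain grid and its range are arbitrary choices for the check):

```python
import numpy as np

def closed_loop(k):
    # Canonical (companion) form with last row a = (-1/2, -5 + 2k, k).
    return np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [-0.5, -5.0 + 2.0 * k, k]])

# Every constant gain leaves an eigenvalue outside the closed unit disc.
radii = [max(abs(np.linalg.eigvals(closed_loop(k)))) for k in np.linspace(-10, 10, 401)]
print(min(radii) > 1)  # True: no time-independent output feedback stabilizes
```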
6. The Proof of Theorem S
The verification of Theorem S is an easy consequence of the following result.
Proposition 6.1. Suppose that c ≠ 0 and b ≠ 0 and that the continuous system (1.4)
is either controllable or observable. Then cAdj(Ah)bh ≠ 0 for all sampling periods h
except for a discrete set.
Proof. It is known that the adjoint of a nonsingular matrix is its inverse matrix
multiplied by its determinant. See e.g. Hohn [4, page 93]. Since Ah = e^{hA} is not
singular it follows that Adj(Ah) = Det(e^{hA}) e^{−hA}. Using the expression in (1.6) for
bh we write

q(h) = c e^{−hA} ∫_0^h e^{(h−s)A} b ds = c ( ∫_0^h e^{−sA} ds ) b.     (6.1)

Since the determinant of e^{hA} is not zero it follows that q(h) = 0 if and only if
cAdj(Ah)bh = 0. Since q(·) is analytic it is enough to verify that q(·) is not identically equal to zero. Differentiating q(·) yields

(d/dh) q(h) = c e^{−hA} b.     (6.2)

Now assuming q(·) ≡ 0 leads to a contradiction. Indeed, differentiating the expression (6.2) j times with respect to h and plugging in h = 0 yields cA^j b = 0 for j =
0, 1, . . . , n − 1. Since c ≠ 0 and b ≠ 0 the n equalities contradict both controllability
and observability. This completes the proof.
Conclusion of the proof of Theorem S. It is known that the controllability and
observability of the continuous system (1.4) implies that except for a discrete set of
sampling periods h the discrete time system (1.5) is also controllable and observable.
See Sontag [11, Chapter 3, Theorem 4]. In view of Proposition 6.1, except for
a discrete set of sampling periods h the discrete-time system (1.5) is controllable, observable and satisfies cAdj(Ah)bh ≠ 0. Theorem S now follows from
Theorem D.
A result analogous to Theorem S where, however, a complete state feedback is
allowed is verified in Mita and Nam [8]; it is shown in [8] that when feedback from
the state is available the number of samples needed to nullify the system is equal to
n. Our examples show that one may need more than n observations when only a
single output is available.
Remark 6.2. We remind the reader that employing Algorithm 4.1 the nullifying
control for the sampled-data discretization of a continuous system can be computed.
Remark 6.3. Theorem S adds an interesting facet to the output stabilization problem displayed in Brockett [3], namely, the problem of stabilizing (1.4) while utilizing a continuous time-dependent feedback u = k(t)cx. The answer to this problem is indeed complex. A sample of papers addressing the problem is Leonov [6],
Moreau and Aeyels [9], [10]. These papers provide explicit examples where stabilization may not be possible with a continuous, even time-varying, feedback. Our
approach does not contribute directly to the original problem but shows that via
sampling a much stronger property, namely nullification, is obtained. The advantages of stabilizing with time-varying output feedback in the sampled-data framework
are well documented in the literature; see Aeyels and Willems [1, 2], Kabamba [5]. The
latter paper establishes the possibility to nullify the system with generalized sampled-data hold functions, namely, when a preprocessed time-varying input between
samplings is allowed. Needless to say, nullification cannot be achieved at all with a
continuous linear feedback.
We provide a telling example.
Example 6.4. Consider the controlled harmonic oscillator where only the first coordinate of the state is observed directly, namely, the two-dimensional system given by

dx/dt = [[0, 1], [−1, 0]] x + (0, 1)^T u,
y = (1, 0)x.     (6.3)
The h-sampling system for (6.3) is easy to compute. It is given by

x_{j+1} = [[cos(h), sin(h)], [−sin(h), cos(h)]] x_j + (1 − cos(h), sin(h))^T u,
y = (1, 0)x.     (6.4)
Using Sontag [11, Chapter 3 Theorem 4] it is easy to check that (6.4) is controllable
and observable when h is not an integer multiple of π . The term cAdj(Ah )bh is also
easy to compute (compare with (5.4)); it is given by cos(h) − 1. A consequence of
our result is that whenever h is not an integer multiple of π the system (6.3) can be
nullified via h-sample linear output feedback.
A peculiarity of this particular example (documented already in Kabamba [5,
Example 1]) is that although (6.3) cannot be stabilized utilizing a continuous output
stationary feedback, when h ≠ jπ (for an integer j) the h-sample output stationary
feedback given by u = (1/2)cx stabilizes the system.
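Both computations of Example 6.4 can be reproduced in a few lines; a Python sketch (outside the paper; the sampling values below are arbitrary test points):

```python
import numpy as np

def sampled(h):
    # The h-sampled harmonic oscillator (6.4).
    Ah = np.array([[np.cos(h), np.sin(h)], [-np.sin(h), np.cos(h)]])
    bh = np.array([[1 - np.cos(h)], [np.sin(h)]])
    return Ah, bh

c = np.array([[1.0, 0.0]])
for h in (0.5, 1.0, 2.5):
    Ah, bh = sampled(h)
    adj = np.linalg.det(Ah) * np.linalg.inv(Ah)  # adjugate of a nonsingular matrix
    assert np.isclose((c @ adj @ bh).item(), np.cos(h) - 1)

# The stationary sampled feedback u = (1/2) c x at h = pi/2 is stabilizing:
Ah, bh = sampled(np.pi / 2)
rho = max(abs(np.linalg.eigvals(Ah + 0.5 * bh @ c)))
print(rho < 1)  # True: the spectral radius is about 0.707
```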
A Concluding Remark 6.5. The analysis carried out in this section together with
the preceding discrete time analysis enhances previous perceptions concerning the
power of the sampled data method. Linear time-invariant systems which cannot be
stabilized via memoryless output feedback (like the harmonic oscillator) can in fact
be nullified with an h-sampling linear output feedback for almost any h. The explanation of the phenomenon is that sampling is a sort of dynamic feedback which is
weaker than the general dynamic feedback which utilizes an auxiliary equation, but
stronger than the time-varying feedback which exploits an auxiliary clock. Indeed,
during the period between samplings the feedback strategy exploits a weak form of
memory, namely, it remembers the state at the beginning of the period. This kind
of memory, as shown in the paper, is quite powerful when it comes to stabilization
and nullification issues.
References
[1] Aeyels D, Willems JL (1991) Pole assignment for linear time-invariant second-order systems by
periodic static output feedback. IMA J. Math. Control Inf., 8:267–274
[2] Aeyels D, Willems JL (1992) Pole assignment for linear time-invariant systems by periodic memoryless output feedback. Automatica, 28:1159–1168
[3] Brockett RW (1999) A stabilization problem. In Blondel VD, Sontag ED, Vidyasagar M, Willems
JC (eds) Open Problems in Mathematical Systems and Control Theory. Springer, London, pp. 75–78
[4] Hohn FE (1964) Elementary matrix algebra, 2nd edn. MacMillan, New York
[5] Kabamba PT (1987) Control of linear systems using generalized sampled-data hold functions. IEEE
Trans. Automatic Control, 32:771–783
[6] Leonov GA (2001) Algorithms of linear nonstationary stabilization and the Brockett problem.
J. Appl. Math. Mech., 65:777–783
[7] Leonov GA (2002) The Brockett problem for linear discrete control systems. Automation and Remote Control, 63:777–781
[8] Mita T, Nam KT (2003) Time varying deadbeat control of high order chained systems. Asian J
Control, 5:316–323
[9] Moreau L, Aeyels D (1999) Stabilization by means of periodic output feedback. In: Proc IEEE
conference on Decision and control, Phoenix, AZ, pp. 108–109
[10] Moreau L, Aeyels D (2000) A note on stabilization by periodic output feedback for third order systems. In: Proc 14th Int Symp Mathematical Theory of Networks and Systems (MTNS), Perpignan
[11] Sontag ED (1998) Mathematical Control Theory: Deterministic Finite Dimensional Systems, 2nd
edn. Springer, New York Berlin Heidelberg
[12] Stewart GW (1998) On the adjugate matrix. Linear Algebra Appl., 283:151–164