Necessary conditions for the embeddability of discrete-time
state-wise monotone Markov chains.
Marie-Anne Guerry
MOSI
Vrije Universiteit Brussel
Pleinlaan 2, B-1050 Brussels, Belgium
e-mail: [email protected]
November 25, 2012
Abstract
State-wise monotone probability matrices are important in Markov models with ordered states.
The embeddability problem for discrete-time Markov chains is of interest when observations are available for a certain time unit but information is lacking about time intervals of length less than 1.
In this paper embeddability within the set of state-wise monotone probability matrices is examined. More specifically, for state-wise monotone probability matrices of order (2 × 2), (3 × 3) or (4 × 4), a trace of at least 1 is proved to be a necessary condition for embeddability with respect to time unit 0.5.
Key words. Embeddability problem; State-wise monotone transition matrix.
1 Introduction
In previous work the embeddability problem has been examined for discrete-time Markov chains without requiring specific properties for the transition matrices involved. Conditions have been formulated under which there exists a Markov chain that is compatible with the initial one and for which the time unit is smaller than that of the initial one ([6], [3], [2]).
For a Markov chain with transition matrix P the embeddability problem with respect to time unit 0.5 concerns the question of whether or not there exists a probability matrix A satisfying
P = A × A.
Such a matrix A is then a square root probability matrix, i.e. a probability matrix that is a square root of P.
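As a minimal illustration (not part of the original analysis), the following sketch checks numerically whether a candidate matrix A is a probability matrix whose square equals a given P; the matrices and the function name are hypothetical.

import numpy as np

def is_probability_square_root(A, P, tol=1e-8):
    """Check whether A is a probability matrix with A x A approximately equal to P."""
    A = np.asarray(A, dtype=float)
    P = np.asarray(P, dtype=float)
    is_stochastic = np.all(A >= -tol) and np.allclose(A.sum(axis=1), 1.0, atol=tol)
    return is_stochastic and np.allclose(A @ A, P, atol=tol)

# Hypothetical example: A is a probability matrix and P = A x A.
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
P = A @ A
print(is_probability_square_root(A, P))  # True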
For an embeddable Markov chain with transition matrix P there exist transition matrices A with respect to time unit 0.5 that are compatible with P. Nevertheless, the square root probability matrices A do not necessarily reflect the intrinsic properties of the modeled system. If this is the case, in the identification phase no solution would correspond to the characteristics of the system. It is therefore of interest to know necessary conditions for the existence of a probability matrix that is a square root of P and that simultaneously reflects the intrinsic properties of the system.
Intrinsic properties have to be taken into account in the specific situation of a Markov model with ordered states. This is, for example, the case for a hierarchical graded manpower system in which the states are defined by wage intervals ([4]) or by increasing levels of job authority ([7]). For such systems the transition probabilities are the elements of what is called a state-wise monotone transition matrix. For an embeddable state-wise monotone Markov chain, the compatible Markov chain is also required to be state-wise monotone.
This paper deals with necessary conditions for embeddability with respect to time unit 0.5 within the subset of discrete-time Markov chains that are state-wise monotone. Necessary conditions are proved for the existence of square root probability matrices that are state-wise monotone, for Markov chains with 2, 3 or 4 states.
2 Necessary embeddability conditions for state-wise monotone Markov chains
In several Markov models the states are ordered. This is, for example, the case for a hierarchical graded manpower system. For a manpower system with ordered states, the transition matrix is expected to be a state-wise monotone probability matrix. A state-wise monotone probability matrix $A = (a_{ij})$ is a probability matrix satisfying the conditions:
(1) $a_{ij} \geq a_{ik}$ for all $i < j < k$
(2) $a_{kj} \geq a_{ki}$ for all $i < j < k$
(3) $a_{i,i+1} \geq a_{i,i-1}$; $a_{i-1,i} \geq a_{i+1,i}$ and $a_{i,i+1} \geq a_{i+1,i}$
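As an illustration (not part of the original text), the sketch below transcribes conditions (1)-(3) into a direct check of state-wise monotonicity for a square matrix; indices are 0-based, while the conditions above use 1-based state labels.

import numpy as np

def is_statewise_monotone(A, tol=1e-12):
    """Check conditions (1)-(3) for a square probability matrix A (0-based indices)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if A[i, j] < A[i, k] - tol:   # condition (1): a_ij >= a_ik
                    return False
                if A[k, j] < A[k, i] - tol:   # condition (2): a_kj >= a_ki
                    return False
    for i in range(1, n - 1):                  # condition (3), for interior states i
        if A[i, i + 1] < A[i, i - 1] - tol:
            return False
        if A[i - 1, i] < A[i + 1, i] - tol:
            return False
        if A[i, i + 1] < A[i + 1, i] - tol:
            return False
    return True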
In [5], Singer and Spilerman distinguish three stages: the embeddability problem, the inverse problem and the identification problem. The identification problem deals with selecting an appropriate root from the alternatives, based on certain criteria, in case the Markov chain is embeddable in more than one Markov chain. The issue is that not all probability roots necessarily correspond to the characteristics of the manpower system.
Another approach to this problem is to restrict the discussion from the start to a subset of probability matrices that reflect the characteristics of the manpower system. In that way, solutions within this set result in acceptable quantifications of the characteristics of the manpower system involved. Following this approach, in what follows the probability roots are restricted to state-wise monotone probability matrices.
Theorem 2.1 formulates, for (2 × 2), (3 × 3) and (4 × 4) transition matrices P, a necessary condition for the embeddability of P in a state-wise monotone Markov chain with respect to time unit 0.5: it is proved that in case there exists a square root probability matrix A that is state-wise monotone and compatible with P, the condition tr(P) ≥ 1 holds.
Theorem 2.1 For a probability matrix P of order (2 × 2), (3 × 3) or (4 × 4) for which there exists a square root that is a state-wise monotone probability matrix, it holds that tr(P) ≥ 1.
Proof
(1) For a (2 × 2) probability matrix P with probability square root $A = (a_{ij})$ the trace of P equals:
\begin{align*}
\mathrm{tr}(P) &= \mathrm{tr}(A \times A) \\
&= (A \times A)_{11} + (A \times A)_{22} \\
&= (1 - a_{12})^2 + a_{12}a_{21} + (1 - a_{21})^2 + a_{12}a_{21} \\
&= 1 + (1 - a_{12} - a_{21})^2 \\
&\geq 1
\end{align*}
(2) For a (3 × 3) probability matrix P with probability square root $A = (a_{ij})$ the trace of P equals:
\begin{align*}
\mathrm{tr}(P) &= \mathrm{tr}(A \times A) \\
&= (1 - a_{12} - a_{13})^2 + (1 - a_{21} - a_{23})^2 + (1 - a_{31} - a_{32})^2 + 2a_{12}a_{21} + 2a_{13}a_{31} + 2a_{23}a_{32} \\
&= (1 - a_{12} - a_{13})^2 - 2a_{21}(1 - a_{12} - a_{13}) + a_{21}^2 + (1 - a_{31} - a_{32})^2 - 2a_{23}(1 - a_{31} - a_{32}) + a_{23}^2 \\
&\quad + (1 - a_{21} - a_{23})^2 + 2a_{21}(1 - a_{12} - a_{13}) - a_{21}^2 + 2a_{23}(1 - a_{31} - a_{32}) - a_{23}^2 \\
&\quad + 2a_{12}a_{21} + 2a_{13}a_{31} + 2a_{23}a_{32} \\
&= 1 + (1 - a_{12} - a_{13} - a_{21})^2 + (1 - a_{31} - a_{32} - a_{23})^2 + 2a_{21}a_{23} + 2a_{13}a_{31} - 2a_{13}a_{21} - 2a_{23}a_{31}
\end{align*}
This expression can be rewritten in the form
\[
\mathrm{tr}(P) = 1 + (1 - a_{12} - a_{13} - a_{21})^2 + (1 - a_{31} - a_{32} - a_{23})^2 + 2(a_{23} - a_{13})(a_{21} - a_{31}).
\]
For a state-wise monotone matrix A it holds that $a_{23} - a_{13} \geq 0$ as well as $a_{21} - a_{31} \geq 0$, and consequently tr(P) ≥ 1.
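The rewriting above can also be confirmed symbolically. The following SymPy sketch (an illustration, not part of the proof) verifies that the two expressions for tr(P) coincide for a general (3 × 3) probability square root A.

import sympy as sp

# Symbols for the off-diagonal entries; diagonal entries follow from the row sums.
a12, a13, a21, a23, a31, a32 = sp.symbols('a12 a13 a21 a23 a31 a32')
A = sp.Matrix([
    [1 - a12 - a13, a12, a13],
    [a21, 1 - a21 - a23, a23],
    [a31, a32, 1 - a31 - a32],
])
trP = (A * A).trace()
claimed = (1 + (1 - a12 - a13 - a21)**2 + (1 - a31 - a32 - a23)**2
           + 2*(a23 - a13)*(a21 - a31))
print(sp.expand(trP - claimed))  # expected output: 0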
(3) In an analogous way, for a (4 × 4) probability matrix P with probability square root $A = (a_{ij})$ the trace of P can be expressed as:
\begin{align*}
\mathrm{tr}(P) &= \mathrm{tr}(A \times A) \\
&= (1 - a_{12} - a_{13} - a_{14})^2 + (1 - a_{21} - a_{23} - a_{24})^2 + (1 - a_{31} - a_{32} - a_{34})^2 + (1 - a_{41} - a_{42} - a_{43})^2 \\
&\quad + 2a_{12}a_{21} + 2a_{13}a_{31} + 2a_{14}a_{41} + 2a_{23}a_{32} + 2a_{24}a_{42} + 2a_{34}a_{43} \\
&= (1 - a_{12} - a_{13} - a_{14})^2 - 2a_{31}(1 - a_{12} - a_{13} - a_{14}) + a_{31}^2 \\
&\quad + (1 - a_{21} - a_{23} - a_{24})^2 - 2a_{32}(1 - a_{21} - a_{23} - a_{24}) + a_{32}^2 \\
&\quad + (1 - a_{41} - a_{42} - a_{43})^2 - 2a_{34}(1 - a_{41} - a_{42} - a_{43}) + a_{34}^2 \\
&\quad + (1 - a_{31} - a_{32} - a_{34})^2 + 2a_{31}(1 - a_{12} - a_{13} - a_{14}) - a_{31}^2 \\
&\quad + 2a_{32}(1 - a_{21} - a_{23} - a_{24}) - a_{32}^2 + 2a_{34}(1 - a_{41} - a_{42} - a_{43}) - a_{34}^2 \\
&\quad + 2a_{12}a_{21} + 2a_{13}a_{31} + 2a_{14}a_{41} + 2a_{23}a_{32} + 2a_{24}a_{42} + 2a_{34}a_{43} \\
&= 1 + (1 - a_{12} - a_{13} - a_{14} - a_{31})^2 + (1 - a_{21} - a_{23} - a_{24} - a_{32})^2 + (1 - a_{41} - a_{42} - a_{43} - a_{34})^2 \\
&\quad + 2(a_{21} - a_{31})(a_{12} - a_{32}) + 2(a_{31} - a_{41})(a_{34} - a_{14}) + 2(a_{32} - a_{42})(a_{34} - a_{24})
\end{align*}
For a state-wise monotone matrix A the last three terms are nonnegative and therefore tr(P) ≥ 1.
This proves the theorem. □
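Theorem 2.1 can be used as a quick screening test: if tr(P) < 1, no state-wise monotone probability square root of P exists. The following sketch (with a hypothetical observed transition matrix) illustrates this use.

import numpy as np

# Hypothetical observed (3 x 3) transition matrix; rows sum to 1 and tr(P) = 0.8.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.3, 0.3],
              [0.3, 0.4, 0.3]])

# Necessary condition of Theorem 2.1 (orders 2, 3 and 4 only).
if np.trace(P) < 1:
    print("tr(P) < 1: P has no state-wise monotone probability square root (time unit 0.5).")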
3 Further research questions
In case the states are ordered, state-wise monotonicity is a natural assumption to impose on a Markov chain. In this paper a necessary condition for embeddability within the set of state-wise monotone Markov chains is provided in case the number of states equals 2, 3 or 4: the transition matrix P of a Markov chain that is embeddable in a state-wise monotone Markov chain with respect to time unit 0.5 satisfies tr(P) ≥ 1. The state-wise monotone Markov chains form an important category of Markov chains in practice. It is a challenge for further research to find necessary embeddability conditions for state-wise monotone Markov chains in general, and more specifically in case of more than 4 states.
In this paper necessary conditions for embeddability in state-wise monotone Markov chains are provided. Depending on the nature of the system, another type of characterization of the transition matrix may apply. In practice it is important to know sufficient conditions for the probability roots to reflect this characterization. A challenge for further research is to describe necessary conditions for embeddability within classes other than the state-wise monotone Markov chains. The monotone Markov chains form another class of Markov chains that is important in practice, for example in mobility models ([1]). Future research should deal with the embeddability problem within monotone Markov chains.
References
[1] Conlisk, J. (1990). Monotone mobility matrices. The Journal of Mathematical Sociology. 15(3-4), 173–191.
[2] Guerry, M.A. (2012). Probability square roots for (2 × 2) transition matrices. Working paper MOSI/50.
[3] Higham, N.J., Lin, L. (2011). On pth roots of stochastic matrices. Linear Algebra and its Applications 435,
448–463.
[4] Pamminger, C., Tüchler, R. (2011). A Bayesian analysis of female wage dynamics using Markov chain
clustering. Austrian Journal of Statistics 40(4), 281–296.
[5] Singer, B., Spilerman, S. (1974). Social mobility models for heterogeneous populations. In: Costner, H.L.
(Ed.), Sociological Methodology 5, 356–401.
[6] Singer, B., Spilerman, S. (1976). The representation of social processes by Markov chains. The American
Journal of Sociology 82, 1–54.
[7] Zeng, Z. (2011). The myth of the glass ceiling: Evidence from a stock-flow analysis of authority attainment.
Social Science Research 40(1), 312–325.