
SAN FERNANDO VALLEY STATE COLLEGE

METHODS FOR A TRANSIENT SOLUTION
TO A QUEUEING PROBLEM

A thesis submitted in partial satisfaction of the requirements
for the degree of Master of Science in
MATHEMATICS

by
Hans Kaspar

September 1969
The thesis of Hans Kaspar is approved:
San Fernando Valley State College
September 1969
ACKNOWLEDGMENT

The author wishes to express his appreciation to Assistant Professor A. Banos for his valuable advice and suggestions during the conceptual phase of this thesis. Also, I would like to thank Professors E. Ostrow and W. Karush for their active support and encouragement to develop the mathematical techniques for analysis of this queueing problem.
CONTENTS

                                                                 Page
1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . .      1
2. DISCUSSION OF THE PROBLEM  . . . . . . . . . . . . . . . .      4
   2.1  Development of the Problem  . . . . . . . . . . . . .      4
   2.2  Formulation of the Mathematical Model . . . . . . . .      5
   2.3  The Moments of the Number of Parked Cars as a
        Function of Time  . . . . . . . . . . . . . . . . . .      8
3. METHODS OF SOLUTION  . . . . . . . . . . . . . . . . . . .     12
   3.1  General Remarks . . . . . . . . . . . . . . . . . . .     12
   3.2  Solution by the Laplace Transform . . . . . . . . . .     12
   3.3  Solution by the Eigenvalue-Eigenvector Method . . . .     16
   3.4  Solution by the Method of a Probability Generating
        Function  . . . . . . . . . . . . . . . . . . . . . .     22
   3.5  Solution by the Method of a Weighted Probability
        Generating Function . . . . . . . . . . . . . . . . .     27
   3.6  Solution for the Case of N = 1  . . . . . . . . . . .     32
4. APPENDIX . . . . . . . . . . . . . . . . . . . . . . . . .     38
   4.1  The Stay-Time and Arrival Distribution Functions  . .     38
   4.2  The Probability Mass Functions of the Number of
        Car Arrivals  . . . . . . . . . . . . . . . . . . . .     40
   4.3  Properties of the Transformed Transition Probability
        Matrix  . . . . . . . . . . . . . . . . . . . . . . .     41
   4.4  The Partial Derivatives of the Weighted Probability
        Generating Function . . . . . . . . . . . . . . . . .     46
5. BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . .     49
ABSTRACT

METHODS FOR A TRANSIENT SOLUTION
TO A QUEUEING PROBLEM

by
Hans Kaspar
Master of Science in Mathematics
September 1969

The transient solution of a parking lot problem with a finite number of parking spaces is considered when, in a given time interval, the number of cars arriving is Poisson distributed with identical parameters. If the parking lot is full, all arriving cars depart immediately, thus allowing no queue to be formed.

The system is expressed in terms of its transition probabilities, which form a finite system of linear differential equations. New methods involving the application of generating functions are developed and described, along with some standard methods for solving the transient queueing problem.
1. INTRODUCTION
In recent years, much interest has been devoted to queueing problems. As a consequence, a considerable amount of theoretical literature has been written on the subject. This thesis will study, in detail, several methods of solution to a particular queueing situation; namely, a parking lot problem with no queue allowed. It is hoped that many pertinent concepts of theory will be brought forth and clarified.
When one refers to a queue, the implication is that there exists a line-up or file of individuals, automobiles, animals, etc. Thus, a queue is often called a waiting line, whose behavior is dynamically that of a larger system. This larger system may be broken down into two basic stages:

•  An input stage, into which items are fed into the system

•  A servicing stage, wherein the items in the system are serviced in some manner and then discharged from the system

The waiting line mentioned previously refers to the input supply which has arrived in the system but has not yet been serviced. Queueing problems, then, are concerned with the study of the waiting line and the number of units in the system as it fluctuates and changes with time.
To study this dynamic system, we need to specify three phases of the operation.

a) Input Process

This process refers to the individuals arriving in the system and is specified by the probability mass function, Q_k(t), where k is the number of arrivals occurring within the time interval t. In this problem, it is assumed that the interval of time between successive arrivals at the parking lot is a random variable whose distribution is a negative exponential. Thus, during the period t, the number of arrivals is given by the Poisson probability mass function

    Q_k(t) = ((λt)^k / k!) e^(-λt),   k = 0, 1, 2, ...,

where λ is the average arrival rate. The various properties of the Exponential and Poisson distributions will be discussed in the Appendix, Sections 4.1 and 4.2.
b) Queue Discipline

This refers to the rule by which units are placed to form a queue, and the manner in which they are moved while waiting. In this study, if the parking lot is full, an arriving car will depart immediately; therefore, a queue will not be formed.
c) Service Mechanism

This refers not only to the manner in which a unit is serviced, but also to the time required to service an individual. That is, many problems reflect the case of multiple channel service situations, wherein there are several servicing agents in parallel, in series, or both. The parking lot problem is considered to be a multiple parallel channel service situation, each channel being a parking lot space with a corresponding stay-time distribution function. The stay-time, t, of a car follows the negative-exponential distribution

    F(t) = 1 - e^(-μt),   t ≥ 0,

where 1/μ is the average stay-time of a car parked on the lot. Sometimes, one wishes to know the probability that ℓ parked cars depart within an interval of time of duration t. In this problem, the number of departures, ℓ, may also be given by the Poisson probability mass function

    ((μt)^ℓ / ℓ!) e^(-μt),   ℓ = 0, 1, 2, ...
The system will be assumed to be a collection of discrete objects; namely, cars. The number of cars in the system is denoted by n, a discrete set of numbers (n = 0, 1, 2, ..., N), where N is the total number of parking spaces.

The behavior of the system as it fluctuates with time is characterized by P_n(t) = Prob(n cars parked at time t). Some attributes defining system behavior are the number of empty parking spaces, the number of cars parked, and the probability that the parking lot is full. The two most frequently studied characteristics are the first two moments of the number of parked cars, which serve as measures of system fluctuation.

This thesis will be concerned with the number of parked cars under transient conditions and will develop several methods of solution that describe the process for any time. Still, some attention will be devoted to the steady-state solution, which is the solution for the number of cars parked when t approaches infinity. Thus, the random variable, number of cars parked, which is described by the probability mass function P_n(t) (where n = 0, 1, 2, ..., N, and P_0(0) = 1), denoting the probability of n cars parked at time t, will be studied together with its moments when investigating these different methods of solution.
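Before turning to the analytic methods, the model just described can be checked by direct simulation of the competing exponential arrival and departure events. The following sketch is not part of the original development; the function name is ours and the rate values used for checking are illustrative assumptions.

```python
import random

def simulate_parking_lot(lam, mu, N, t_end, runs=20000, seed=1):
    """Estimate P_n(t_end) = Prob(n cars parked at time t_end).

    Arrivals form a Poisson stream of rate lam; each parked car stays an
    Exponential(mu) time; an arrival finding all N spaces full departs
    at once.  The lot starts empty, so P_0(0) = 1.
    """
    rng = random.Random(seed)
    counts = [0] * (N + 1)
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            rate = lam + n * mu            # total event rate in state E_n
            t += rng.expovariate(rate)
            if t >= t_end:
                break
            if rng.random() < lam / rate:  # next event is an arrival
                if n < N:                  # a full lot turns the car away
                    n += 1
            else:                          # next event is a departure
                n -= 1
        counts[n] += 1
    return [c / runs for c in counts]
```

For N = 1 the estimate can be compared against the closed form P_1(t) = (λ/(λ+μ))(1 - e^(-(λ+μ)t)) obtained later in Section 3.6.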
2. DISCUSSION OF THE PROBLEM

2.1 DEVELOPMENT OF THE PROBLEM
A parking lot consists of N parking spaces with one or more entrances or exits. It is a dynamic system described by the characteristic, P_n(t), which represents a probability mass function of time. The number of parked cars forms a stochastic process {X(t), t ∈ T} indexed by the time parameter, t, in an index set T. A typical sample function, X(t), is graphically shown below.
[Figure: a typical sample function X(t), the number of parked cars (0 to N) plotted against time, t]
For a given point in time, the random variable, X(t), is a discrete-valued function with the number of cars parked as its values. The probability mass function of X(t) is P_n(t), which represents the probability that n cars are in the parking lot at time t. This relationship may be expressed, in standard notation, as

    P_n(t) = Prob{X(t) = n}.

The objective of this thesis, then, is to develop or demonstrate techniques by which the probability mass function, P_n(t), or the moments may be obtained.
2.2 FORMULATION OF THE MATHEMATICAL MODEL
Cars that arrive at the parking lot follow a Poisson distribution function, Q_k(t), in time with some known average arrival rate λ. Arriving vehicles park as long as space is available; however, if the parking lot is full, arriving cars will immediately depart. The stay-time of the parked cars forms a negative exponential distribution function, F(t), with some known average departure rate, μ. A more precise formulation is as follows:

Let N stand for the total number of parking spaces. The parking lot can be described by noting the number of cars parked, and hence, the number of spaces still empty. Let E_n (where n = 0, 1, 2, ..., N) denote the state of having n cars parked. It is assumed that the system changes only through transitions to adjacent states (from E_n to E_{n±1} if 1 ≤ n ≤ N-1, from E_0 to E_1, and from E_N to E_{N-1}). To derive an expression for P_n(t + Δt), the probability of being in state E_n at time t + Δt, it is necessary to consider only simple transitions of a single car.
Remember that the parking lot consists of N parking spaces with equal exponentially distributed stay-times, each with a mean departure rate μ. The probability that a particular parked car departs within a time interval of length Δt is μΔt e^(-μΔt) ≈ μΔt, when neglecting higher powers of Δt. However, if n ≤ N cars are parked, then the probability that any one of the cars will depart in time Δt is nμΔt, since any one of the n parking spaces may become empty during this time interval Δt. The probability, therefore, that none of the n parked cars departs in the time interval Δt is 1 - nμΔt, to first order in Δt. The probability that a car arrives in the interval Δt is Q_1(Δt) = λΔt e^(-λΔt) ≈ λΔt, and the probability that no car arrives during the same time interval is Q_0(Δt) = e^(-λΔt) ≈ 1 - λΔt, when neglecting higher powers of Δt.
The various possibilities and their probabilities (to terms of order Δt), for 1 ≤ n ≤ N-1, are:

a) At time t, the system is in state E_n, with probability P_n(t), and during (t, t + Δt) no changes occur, with probability (1 - λΔt)(1 - nμΔt) ≈ 1 - λΔt - nμΔt + O(Δt²).

b) At time t, the system is in state E_{n-1}, with probability P_{n-1}(t), and a transition to E_n occurs with probability λΔt.

c) At time t, the system is in state E_{n+1}, with probability P_{n+1}(t), and a transition to E_n occurs with probability (n+1)μΔt.

Since the above contingencies are mutually exclusive events,

    P_n(t + Δt) = P_n(t)(1 - λΔt - nμΔt) + P_{n-1}(t)λΔt + P_{n+1}(t)(n+1)μΔt.   (1)
Proceeding similarly for the special cases n = 0, N results in

    P_0(t + Δt) = P_0(t)(1 - λΔt) + P_1(t)μΔt   (2)

and

    P_N(t + Δt) = P_N(t)(1 - NμΔt) + P_{N-1}(t)λΔt.   (3)
Expressions (1), (2), and (3) can be used to determine an associated system of linear differential equations in the usual manner:

    P_0'(t) = -λP_0(t) + μP_1(t)   (4)

    P_n'(t) = λP_{n-1}(t) - (λ + nμ)P_n(t) + (n+1)μP_{n+1}(t);   1 ≤ n ≤ N-1   (5)

    P_N'(t) = λP_{N-1}(t) - NμP_N(t).   (6)

It is the purpose of this thesis to explore and develop techniques to solve this system of differential equations and other associated problems.
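The system (4), (5), and (6) can also be integrated numerically as an independent check on the analytic solutions. A minimal sketch, assuming a classical fourth-order Runge-Kutta march from the initial condition P(0) = [1, 0, ..., 0] (the helper names are ours):

```python
def forward_equations_rhs(p, lam, mu):
    """Right-hand side of the forward equations (4)-(6) for the N-space lot."""
    N = len(p) - 1
    dp = [0.0] * (N + 1)
    dp[0] = -lam * p[0] + mu * p[1]
    for n in range(1, N):
        dp[n] = lam * p[n-1] - (lam + n * mu) * p[n] + (n+1) * mu * p[n+1]
    dp[N] = lam * p[N-1] - N * mu * p[N]
    return dp

def integrate(lam, mu, N, t_end, steps=2000):
    """Classical RK4 march of P'(t) = rhs(P) from P(0) = [1, 0, ..., 0]."""
    h = t_end / steps
    p = [1.0] + [0.0] * N
    for _ in range(steps):
        k1 = forward_equations_rhs(p, lam, mu)
        k2 = forward_equations_rhs([p[i] + 0.5*h*k1[i] for i in range(N+1)], lam, mu)
        k3 = forward_equations_rhs([p[i] + 0.5*h*k2[i] for i in range(N+1)], lam, mu)
        k4 = forward_equations_rhs([p[i] + h*k3[i] for i in range(N+1)], lam, mu)
        p = [p[i] + h*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6.0 for i in range(N+1)]
    return p
```

Since the right-hand sides of (4)-(6) sum to zero, the numerical solution conserves total probability up to roundoff, which is a convenient sanity check.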
It should be noted here that the differential equations (4), (5), and (6) are also known as the "forward" equations, because they were derived by prolonging the time interval [0,t] to [0,t + Δt] and considering the possible changes during the short time interval [t, t + Δt]. One could as well have prolonged the interval [0,t] in the direction of the past and considered the changes during [-Δt, 0]. In this way a new system of differential equations, known as the "backward" equations, is obtained. Standard methods are applied for solving the system of linear differential equations (4), (5), and (6).
In Section 3.2, the Laplace transform procedure for the differential equations is described in detail. The transformation of the differential equations results in a system of rational functions. To invert the system of rational functions, each function is expanded in terms of partial fractions, thereby reducing each term to a known form. The Eigenvalue-Eigenvector method is used in Section 3.3. In the Appendix, Section 4.3, it is shown that the roots of the characteristic equation of the matrix of the transformed transition probabilities are all distinct. This simplifies the canonical decomposition of the matrix of the transition probabilities. Then, given the roots of the characteristic equation, the probability mass function of the number of parked cars is obtained. Finally, methods involving the use of probability-generating functions (which are a special case of the method of characteristic functions) are developed in Sections 3.4 and 3.5 for the transient queueing problem. These methods are applicable to discrete random variables, such as the number of cars parked, which assume only integral values.
2.3 THE MOMENTS OF THE NUMBER OF PARKED CARS AS A FUNCTION OF TIME
The moments of the probability mass function P_n(t) are the expected values of the powers of the random variable X(t) representing the number of parked cars at time t. The k-th moment of X(t) is denoted by

    M_k(t) = Σ_{n=0}^{N} n^k P_n(t),   k = 0, 1, 2, ...   (7)

In particular, M_0(t) = 1. For a given t, the moments are a set of descriptive constants of P_n(t) which are useful in measuring the properties of P_n(t), or possibly for specifying it. For this purpose, knowledge of the set of moments is equivalent to knowledge of P_n(t); that is, in the sense that it should be possible, theoretically, to exhibit all properties of P_n(t) in terms of its moments. As is shown in Section 3.5, equally important is the fact that, given the set of moments of P_n(t), one can uniquely determine the probability mass function P_n(t) in terms of its moments M_k(t), k = 0, 1, 2, ...
Differentiating (7) with respect to time results in

    M_k'(t) = Σ_{n=0}^{N} n^k P_n'(t).   (8)
Multiplying (5) by n, summing for n = 1, 2, ..., N-1, and adding (6), after multiplying it by N, one obtains:

    M_1'(t) = Σ_{j=1}^{1} (-1)^j C_j^1 [μM_{2-j}(t) - λM_{1-j}(t) + λP_N(t)N^{1-j}]
            = λ - μM_1(t) - λP_N(t),

where C_j^k is the binomial coefficient:

    C_j^k = k! / ((k-j)! j!).

For k = 2, (5) is multiplied by n² and summed over n = 1, 2, ..., N-1. Adding (6), after multiplying it by N², results in:

    M_2'(t) = Σ_{j=1}^{2} (-1)^j C_j^2 [μM_{3-j}(t) - λM_{2-j}(t) + λP_N(t)N^{2-j}].

In general, it can be shown easily by induction that the derivative of the k-th moment is:

    M_k'(t) = Σ_{j=1}^{k} (-1)^j C_j^k [μM_{k+1-j}(t) - λM_{k-j}(t) + λP_N(t)N^{k-j}].   (9)
For k = 1, M_1(t) satisfies a first-order linear differential equation whose solution is:

    M_1(t) = λ(1 - e^(-μt))/μ - λ ∫_0^t P_N(x) e^(-μ(t-x)) dx + Ce^(-μt).

As will be determined in Section 3.2, P_N(t), the probability that the parking lot is full, is given by:

    P_N(t) = Σ_{i=0}^{N} B_{N,i} e^(r_{N,i} t),

where the B_{N,i}'s are constants and the r_{N,i}'s are real negative numbers; therefore, M_1(t) reduces to

    M_1(t) = λ(1 - e^(-μt))/μ - λ Σ_{i=0}^{N} B_{N,i}(e^(r_{N,i} t) - e^(-μt))/(r_{N,i} + μ) + Ce^(-μt).

But

    M_1(0) = Σ_{n=0}^{N} n P_n(0) = 0, since at t = 0, {P_n(0)} = {1, 0, ..., 0};

therefore, C = 0, and

    M_1(t) = λ(1 - e^(-μt))/μ - λ Σ_{i=0}^{N} B_{N,i}(e^(r_{N,i} t) - e^(-μt))/(r_{N,i} + μ).
Similarly, for k = 2, M_2(t) satisfies a first-order linear differential equation whose solution again contains a constant of integration C. But M_2(0) = 0, which implies C = 0.

In general, M_k(t) satisfies a first-order linear differential equation with the initial condition M_k(0) = 0; therefore, the particular solution is:

    M_k(t) = μ e^(-kμt) Σ_{j=2}^{k} (-1)^j C_j^k ∫_0^t M_{k+1-j}(x) e^(kμx) dx
           + λ e^(-kμt) Σ_{j=1}^{k} (-1)^j C_j^k ∫_0^t [-M_{k-j}(x) + P_N(x)N^{k-j}] e^(kμx) dx,   (10)

for k = 1, 2, 3, ...
3. METHODS OF PROBLEM SOLUTION

3.1 GENERAL REMARKS
The methods for solving the system of differential equations or
its transform are several.
No particular attention is given to the
order in which these techniques are investigated.
In all cases, some
numerical analysis is required to obtain numerical results.
3.2 SOLUTION BY THE LAPLACE TRANSFORM

Letting P_n*(s) denote the Laplace transform of P_n(t), where P_n*(s) = ∫_0^∞ e^(-st) P_n(t) dt, then, by definition, the Laplace transform of P_n'(t) is sP_n*(s) - P_n(0). Applying this transform to equations (4), (5), and (6) results in a system of algebraic equations:
    P_0*(s)(s + λ) - P_1*(s)μ = 1,   (11)

    P_n*(s)(s + λ + nμ) - P_{n-1}*(s)λ - P_{n+1}*(s)(n+1)μ = 0;   1 ≤ n < N,   (12)

    P_N*(s)(s + Nμ) - P_{N-1}*(s)λ = 0,   (13)

where P_0(0) = 1, P_n(0) = 0; n = 1, 2, ..., N, are the initial conditions.
In matrix notation, the equations become

    S_N P*(s) = P(0),   (14)

where P*(s) and P(0) are two column vectors, [P_0*(s), ..., P_N*(s)] and [1, 0, ..., 0] respectively, with the latter giving the initial conditions. The transformed transition probability matrix:
    S_N =

    | s+λ     -μ        0        0      ...      0    |
    | -λ    s+λ+μ     -2μ        0      ...      0    |
    |  0     -λ     s+λ+2μ     -3μ      ...      0    |
    |  ⋮                ⋱         ⋱       ⋱       ⋮    |
    |  0      0       ...       -λ          s+Nμ      |

is a tridiagonal matrix.
Supposing that the inverse S_N^(-1) exists, then from (14) one obtains

    P*(s) = S_N^(-1) P(0).   (15)

Since the initial condition vector P(0) contains all zeros except for P_0(0) = 1, only the first column vector of S_N^(-1) is of interest. The elements of this column vector are the cofactors c_{0n}(s), n = 0, 1, 2, ..., N, of the first row vector of S_N, divided by the determinant |S_N|. By replacing each element of the first row by the sum of its corresponding column, one sees that |S_N| has s = 0 as a root; thus |S_N| can be written in the form sT_N(s), where T_N(s) is an N-th degree polynomial in s, and

    P_n*(s) = c_{0n}(s)/(sT_N(s)).

Both c_{0n}(s) and sT_N(s) are polynomials, the order of sT_N(s) being greater than the order of c_{0n}(s), n = 0, 1, 2, ..., N. In Section 4.3, it will be shown that sT_N(s) has (N+1) distinct real roots, of which N are negative and one is zero. Thus, sT_N(s) is completely factorable into the unrepeated linear factors:

    (s - r_{N,0}), (s - r_{N,1}), ..., (s - r_{N,N}),   (r_{N,0} = 0),

which implies that the inverse Laplace transform P_n(t) = L^(-1){P_n*(s)} is

    P_n(t) = Σ_{i=0}^{N} [c_{0n}(r_{N,i}) / D'(r_{N,i})] e^(r_{N,i} t) = Σ_{i=0}^{N} B_{N,i} e^(r_{N,i} t);   n = 0, 1, ..., N,   (16)

where D(s) = sT_N(s) and

    B_{N,i} = c_{0n}(r_{N,i}) / D'(r_{N,i}).
By investigating sT_N(s), note that for N = 1,

    sT_1(s) = s² + (λ + μ)s,

and for N = 2,

    sT_2(s) = s³ + (2λ + 3μ)s² + (λ² + 2λμ + 2μ²)s.

In general, it will be shown in Section 4.3 that for any N ≥ 2,

    sT_N(s) = s[T_{N-1}(s)(s + λ + Nμ) - (N-1)λμ T_{N-2}(s)],   (17)

where the coefficients a^(N)_{j,j} of the polynomial T_N(s) follow recursively from (17), with a^(N)_{0,0} = 1 and a^(N)_{N,N} = N!. Thus, for any N, sT_N(s) can be constructed and its roots determined numerically with a digital computer.
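One possible transcription of that construction (the helper name is ours); the recurrence (17) builds up the coefficient list of sT_N(s), after which any standard polynomial root finder, such as numpy.roots, can be applied:

```python
def sTN_poly(lam, mu, N):
    """Coefficient list (highest power first) of sT_N(s) for N >= 1,
    built from the recurrence (17),
        T_N(s) = T_{N-1}(s)(s + lam + N*mu) - (N-1)*lam*mu*T_{N-2}(s),
    with T_0(s) = 1 and T_1(s) = s + lam + mu."""
    T_prev = [1.0]               # T_0
    T = [1.0, lam + mu]          # T_1
    for n in range(2, N + 1):
        c = lam + n * mu
        # multiply T_{n-1} by the linear factor (s + c)
        T_next = [0.0] * (len(T) + 1)
        for i, a in enumerate(T):
            T_next[i] += a           # contribution of a * s
            T_next[i + 1] += a * c   # contribution of a * c
        # subtract (n-1)*lam*mu * T_{n-2}, aligned at the low-order end
        off = len(T_next) - len(T_prev)
        for i, a in enumerate(T_prev):
            T_next[off + i] -= (n - 1) * lam * mu * a
        T_prev, T = T, T_next
    return T + [0.0]             # the final factor of s
```

For example, with λ = μ = 1 and N = 2 this reproduces the coefficients of s³ + 5s² + 5s given above.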
Similarly, to obtain values for c_{0n}(s) at s = r_{N,i}, i = 0, ..., N, first one notes that c_{0N}(s) = λ^N. That is, the cofactor c_{0N}(s) reduces to a triangular matrix whose determinant is λ^N. By triangularizing each cofactor, one easily obtains, by iteration, the following result:

    c_{0n}(s) = λ^n Π_{j=n+1}^{N} b_{j,j}(s);   n = 0, 1, 2, ..., N-1,   (18)

where

    b_{n+1,n+1}(s) = s + λ + (n+1)μ

and

    b_{n+k,n+k}(s) = s + λ + (n+k)μ - (n+k)λμ / b_{n+k-1,n+k-1}(s)

(for j = N, the diagonal term carries s + Nμ in place of s + λ + Nμ). Finally,

    c_{0n}(r_{N,i}) = λ^n Π_{j=n+1}^{N} b_{j,j}(r_{N,i});   i = 0, 1, ..., N.   (19)
To obtain the steady state solution for P_n(t), MORSE(1), for fixed N, has shown that lim_{t→∞} P_n(t) exists; but

    lim_{s→0} sP_n*(s) = lim_{t→∞} P_n(t);

therefore,

    P_n = lim_{s→0} sP_n*(s) = lim_{s→0} s · c_{0n}(s)/(sT_N(s)) = c_{0n}(0)/T_N(0).

For example,

    P_N = [(λ/μ)^N / N!] / [Σ_{i=0}^{N} (λ/μ)^i / i!].

In general, the steady state solution for P_n(t) is

    P_n = [(λ/μ)^n / n!] / [Σ_{i=0}^{N} (λ/μ)^i / i!].
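This steady-state distribution (the Erlang loss distribution) transcribes directly into a few lines; the helper name is ours and the parameter values in any check are illustrative:

```python
import math

def steady_state(lam, mu, N):
    """Steady-state probabilities P_n = (lam/mu)^n/n! / sum_i (lam/mu)^i/i!."""
    rho = lam / mu
    weights = [rho**n / math.factorial(n) for n in range(N + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

For N = 1 this gives P_0 = μ/(λ+μ) and P_1 = λ/(λ+μ), agreeing with the limits found in Section 3.6.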
3.3 SOLUTION BY THE EIGENVALUE-EIGENVECTOR METHOD

To obtain a solution to the system of linear differential equations directly, it is advantageous to consider the system in matrix notation. That is, if

    P(t) = [P_0(t), P_1(t), ..., P_N(t)] and P'(t) = [P_0'(t), P_1'(t), ..., P_N'(t)]

are two column vectors, then the equations can be written as:

    P'(t) = A P(t),   (20)
where

    A =

    | -λ       μ        0        0      ...     0   |
    |  λ    -(λ+μ)     2μ        0      ...     0   |
    |  0       λ    -(λ+2μ)     3μ      ...     0   |
    |  ⋮                 ⋱        ⋱       ⋱      ⋮   |
    |  0       0      ...        λ         -Nμ      |
If one can show that A is a matrix with distinct eigenvalues, then A is similar to a diagonal matrix Λ of these eigenvalues, and a new system of linear differential equations with separated variables is obtained; that is, the n-th equation of the new system contains only one variable.

To show that the eigenvalues of A are distinct, it must be shown that the roots r of the equation

    |A - rI| = 0

are distinct. In Section 4.3, it is shown that the polynomial sT_N(s) has N distinct negative real roots and one root with a value of zero. But

    |rI - A| = rT_N(r);

therefore, the eigenvalues r_{N,0}, r_{N,1}, ..., r_{N,N} of A are distinct, as shown in Section 4.3. Thus A is similar to the diagonal matrix Λ of the eigenvalues; that is,

    A M_R = M_R Λ,   (21)

where M_R = [α_0, α_1, ..., α_N] is an eigenvector matrix (i.e., the columns α_j of M_R are the column eigenvectors of A).
To separate the variables P_n(t), one notes first that

    M_R^(-1) A M_R = Λ   (22)

or

    A = M_R Λ M_R^(-1).   (23)

Then, from (20),

    P'(t) = M_R Λ M_R^(-1) P(t).   (24)

Let

    U(t) = M_R^(-1) P(t);   (25)

then

    U'(t) = M_R^(-1) P'(t) = Λ M_R^(-1) P(t) = Λ U(t).

Thus the new variables are separated, where U_n'(t) = r_{N,n} U_n(t), and the solution can be written directly as

    U(t) = e^(Λt) U(0) = e^(Λt) M_R^(-1) P(0),   (26)

where

    e^(Λt) = diag(e^(r_{N,0} t), e^(r_{N,1} t), ..., e^(r_{N,N} t)).

Hence

    P(t) = M_R e^(Λt) M_R^(-1) P(0),   (27)

where P(0) = [1, 0, 0, ..., 0] is a column vector.
In order to obtain the solution (27), it is necessary to obtain the eigenvalues r_{N,j} and the corresponding eigenvectors α_j (the columns of M_R). The procedure is straightforward and is given below. To obtain M_R^(-1) for use in (27), rather than inverting M_R, it is more efficient to obtain the left eigenvector matrix M_L of A (M_R being the right eigenvector matrix of A), which, except for an appropriate scalar, can be shown to be equal to M_R^(-1). It should be noted that the rows β_j of M_L = [β_0, β_1, ..., β_N] are the left eigenvectors of A, in contrast to the columns α_j of M_R.

Writing equation (22) with the M_R matrix replaced by the left-eigenvector matrix M_L, one obtains

    M_L A M_L^(-1) = Λ.   (28)

But from (23), one sees that M_R^(-1) = M_L, up to an appropriate constant scalar multiplier of M_L. Thus,

    P(t) = M_R e^(Λt) M_L P(0),   (29)

where each row of M_L is normalized (as carried out below) so that M_L M_R = I.
To obtain M_R and M_L, let [α_{0,j}, ..., α_{N,j}] = α_j be the j-th column vector of M_R, and let [β_{j,0}, β_{j,1}, ..., β_{j,N}] = β_j be the j-th row vector of M_L. The matrix equations

    Aα = rα  and  βA = rβ

have solutions α and β other than the zero vector, since r is an eigenvalue of the matrix A.

Consider Aα_j = r_{N,j} α_j, where α_j = [α_{0,j}, α_{1,j}, ..., α_{N,j}]. This matrix equation is equivalent to the following system of linear equations:

    -λα_{0,j} + μα_{1,j} = r_{N,j} α_{0,j},   (30)

    λα_{n-1,j} - (λ + nμ)α_{n,j} + (n+1)μα_{n+1,j} = r_{N,j} α_{n,j};   1 ≤ n ≤ N-1,   (31)

    λα_{N-1,j} - Nμα_{N,j} = r_{N,j} α_{N,j}.   (32)

To solve (30), (31), and (32), let α_{0,j} = 1; then

    α_{1,j} = (r_{N,j} + λ)/μ,

and, from (31),

    α_{n+1,j} = [(r_{N,j} + λ + nμ)α_{n,j} - λα_{n-1,j}] / ((n+1)μ);   1 ≤ n ≤ N-1.

Finally, for n = N, equation (32) is satisfied identically.
Similarly, for β_j A = r_{N,j} β_j, let β_j = [1, β_{j,1}, β_{j,2}, ..., β_{j,N}] = [β_{j,0}, β_{j,1}, ..., β_{j,N}]. First,

    β_{j,0} = 1,

    β_{j,1} = (r_{N,j} + λ)/λ,

    β_{j,n} = [(r_{N,j} + λ + (n-1)μ)β_{j,n-1} - (n-1)μβ_{j,n-2}] / λ;   1 < n < N,

    β_{j,N} = Nμβ_{j,N-1} / (r_{N,j} + Nμ),

which is the solution of the system of equations:

    -λβ_{j,0} + λβ_{j,1} = r_{N,j} β_{j,0},

    nμβ_{j,n-1} - (λ + nμ)β_{j,n} + λβ_{j,n+1} = r_{N,j} β_{j,n};   1 ≤ n ≤ N-1,

    Nμβ_{j,N-1} - Nμβ_{j,N} = r_{N,j} β_{j,N}.
To obtain the normalized row vector β̂_j, note that

    β_j · α_j = Σ_{i=0}^{N} β_{j,i} α_{i,j} = K_j;

that is, β̂_j = K_j^(-1) β_j, or

    β̂_j = [1/K_j, β_{j,1}/K_j, β_{j,2}/K_j, ..., β_{j,N}/K_j],

so that β̂_j · α_j = 1 for j = 0, 1, 2, ..., N. Since P(0) = [1, 0, ..., 0], from Equation (29),

    P_n(t) = Σ_{j=0}^{N} α_{n,j} β̂_{j,0} e^(r_{N,j} t);   n = 0, 1, ..., N.   (33)
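The whole procedure of this section reduces to a few lines with a numerical eigensolver. A sketch assuming numpy is available (the helper names are ours; the left-eigenvector normalization is replaced by a linear solve, which is equivalent to applying M_R^(-1) to P(0)):

```python
import numpy as np

def generator_matrix(lam, mu, N):
    """The coefficient matrix A of equation (20), P'(t) = A P(t)."""
    A = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        A[n, n] = -(lam + n * mu) if n < N else -N * mu
        if n > 0:
            A[n, n - 1] = lam             # arrival moves E_{n-1} -> E_n
        if n < N:
            A[n, n + 1] = (n + 1) * mu    # departure moves E_{n+1} -> E_n
    return A

def transient_pmf(lam, mu, N, t):
    """P(t) = M_R exp(Lambda t) M_R^{-1} P(0), equation (27)."""
    A = generator_matrix(lam, mu, N)
    r, MR = np.linalg.eig(A)              # eigenvalues r_{N,j}; columns of M_R
    p0 = np.zeros(N + 1)
    p0[0] = 1.0                           # the lot starts empty
    u0 = np.linalg.solve(MR, p0)          # M_R^{-1} P(0), no explicit inverse
    return (MR @ (np.exp(r * t) * u0)).real
```

For N = 1 this reproduces the closed forms worked out in Section 3.6.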
3.4 SOLUTION BY THE METHOD OF THE PROBABILITY GENERATING FUNCTION

The probability generating function of P_n(t) is defined by:

    P(z,t) = Σ_{n=0}^{N} P_n(t) z^n.   (34)

Note that P(1,t) = 1 and P(0,t) = P_0(t). In addition, P(z,0) = 1 satisfies the initial condition at t = 0, for P_n(0) = 0, n = 1, ..., N, and P_0(0) = 1.

Differentiating Equation (34) with respect to z results in

    ∂P(z,t)/∂z = Σ_{n=1}^{N} n P_n(t) z^(n-1).   (35)

P_1(t) can now be obtained by evaluating the partial derivative at z = 0; that is,

    ∂P(0,t)/∂z = P_1(t).   (36)

In general,

    ∂^n P(z,t)/∂z^n = Σ_{k=n}^{N} k(k-1)···(k-n+1) P_k(t) z^(k-n),   (37)

or

    (1/n!) ∂^n P(0,t)/∂z^n = P_n(t),   n = 1, 2, ..., N,

which is the desired probability mass function.
By multiplying (4), (5), and (6) by appropriate powers of z, one obtains, for n = 0, 1, 2, ..., N:

    P_0'(t) = -λP_0(t) + μP_1(t)
    zP_1'(t) = zλP_0(t) - z(λ + μ)P_1(t) + z·2μP_2(t)
      ⋮
    z^(N-1) P_{N-1}'(t) = z^(N-1) λP_{N-2}(t) - z^(N-1) (λ + (N-1)μ)P_{N-1}(t) + z^(N-1) NμP_N(t)
    z^N P_N'(t) = z^N λP_{N-1}(t) - z^N NμP_N(t).

Summing the above equations over n results in:

    Σ_{n=0}^{N} P_n'(t)z^n = λ Σ_{n=1}^{N} P_{n-1}(t)z^n - λ Σ_{n=0}^{N-1} P_n(t)z^n
                            - μ Σ_{n=1}^{N} nP_n(t)z^n + μ Σ_{n=0}^{N-1} (n+1)P_{n+1}(t)z^n.   (38)

But

    Σ_{n=0}^{N} P_n'(t)z^n = ∂P(z,t)/∂t  and  Σ_{n=1}^{N} nP_n(t)z^(n-1) = ∂P(z,t)/∂z,

and, remembering that Σ_{n=0}^{N} P_n(t)z^n = P(z,t), (38) reduces to the partial differential equation

    ∂P(z,t)/∂t = λ(z - 1)P(z,t) + μ(1 - z)∂P(z,t)/∂z + λ(z^N - z^(N+1))P_N(t).   (39)
Since the Laplace transform of P(z,t) is:

    L{P(z,t)} = P*(z,s) = Σ_{n=0}^{N} P_n*(s) z^n   (40)

and

    L{∂P(z,t)/∂t} = sP*(z,s) - P(z,0) = sP*(z,s) - 1,   (41)
the partial differential equation reduces to a first-order linear differential equation

    sP*(z,s) - 1 = λ(z - 1)P*(z,s) + μ(1 - z)∂P*(z,s)/∂z + λ(z^N - z^(N+1))P_N*(s)

or

    ∂P*(z,s)/∂z = P*(z,s) f(z,s) - g(z) - h(z) P_N*(s),   (42)

where

    f(z,s) = s/(μ(1 - z)) + λ/μ,

    g(z) = 1/(μ(1 - z)),

    h(z) = (λ/μ) z^N.
Rather than solve Equation (42) and then obtain the inverse Laplace transform L^(-1){P*(z,s)} = P(z,t), one observes that differentiating Equation (42) (n-1) times with respect to z and evaluating at z = 0 yields, by Leibniz's rule,

    ∂^n P*(0,s)/∂z^n = ((n-1)!/μ) s Σ_{i=1}^{n-1} P*_{n-1-i}(s) + ((n-1)!/μ)(s + λ) P*_{n-1}(s) - (n-1)!/μ.   (43)

But

    Σ_{i=1}^{n-1} P*_{n-1-i}(s) = Σ_{j=0}^{n-2} P_j*(s),

and, from Equation (37),

    ∂^n P*(0,s)/∂z^n = n! P_n*(s);   (44)

therefore,

    n! P_n*(s) = ((n-1)!/μ)[s Σ_{j=0}^{n-2} P_j*(s) + (s + λ)P*_{n-1}(s) - 1],

or

    nμ P_n*(s) = s Σ_{j=0}^{n-2} P_j*(s) + (s + λ)P*_{n-1}(s) - 1.   (45)

The Laplace transform of Σ_{n=0}^{N} P_n(t) = 1 gives Σ_{j=0}^{N} P_j*(s) = 1/s; thus (45) becomes

    nμ P_n*(s) = λP*_{n-1}(s) - s Σ_{j=n}^{N} P_j*(s),   (46)
which, upon solving for P*_{n-1}(s), is the desired recursive formula:

    P*_{n-1}(s) = (1/λ)[(nμ + s)P_n*(s) + s Σ_{j=n+1}^{N} P_j*(s)].   (47)

Let n = N; then

    P*_{N-1}(s) = (1/λ)(Nμ + s)P_N*(s),   (48)

but P_N*(s) was determined previously (see Equation (15)); that is, it was found that

    P_N*(s) = c_{0N}(s)/(sT_N(s)) = λ^N/(sT_N(s));

therefore,

    P*_{N-1}(s) = λ^(N-1)(Nμ + s)/(sT_N(s)).   (49)

For n = N-1,

    P*_{N-2}(s) = (1/λ)[((N-1)μ + s)P*_{N-1}(s) + sP_N*(s)].   (50)

Continuing this process, one obtains next

    P*_{N-2}(s) = R_{N-1}(s)/(sT_N(s))   (51)

and finally

    P_0*(s) = R_1(s)/(sT_N(s)),   (52)

where R_1, R_2, ..., R_N are polynomials in s; for example, R_N(s) = λ^(N-1)(μN + s). Applying the inverse Laplace transform to P_n*(s), n = 0, 1, 2, ..., N, yields the desired probability mass function P_n(t).
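The recursion (47) can be verified numerically by solving the transformed system (11)-(13) directly at a sample value of s. A sketch assuming numpy (the helper name is ours); for any s > 0 the solution should satisfy Σ_j P_j*(s) = 1/s as well as the recursion (47):

```python
import numpy as np

def laplace_pmf(lam, mu, N, s):
    """Solve the transformed system (11)-(13), S_N P*(s) = P(0),
    at a numerical value of s > 0."""
    S = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        S[n, n] = s + lam + n * mu if n < N else s + N * mu
        if n > 0:
            S[n, n - 1] = -lam
        if n < N:
            S[n, n + 1] = -(n + 1) * mu
    rhs = np.zeros(N + 1)
    rhs[0] = 1.0          # initial condition P_0(0) = 1
    return np.linalg.solve(S, rhs)
```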
3.5 SOLUTION BY THE METHOD OF THE WEIGHTED PROBABILITY GENERATING FUNCTION

To obtain the probability mass function P_n(t), the solution to the system of first-order linear differential equations, known techniques were applied as described in the previous three sections. In all cases, Equations (4), (5), and (6) were used directly. The method to be developed here leads to an expression of the P_n(t)'s in terms of the moments M_k(t) of X(t), the random variable for the number of parked cars.

Let M(t) = [M_1(t), M_2(t), ..., M_N(t)] be a column vector, each element being a moment M_k(t) of X(t). Then

    M(t) = E P(t),   (53)
where

    E =

    | 1    2      3      ...   N    |
    | 1    2²     3²     ...   N²   |
    | ⋮    ⋮      ⋮            ⋮    |
    | 1    2^N    3^N    ...   N^N  |

is a Vandermonde-type matrix. Since its matrix elements are distinct, except for the first column, E is non-singular, and P(t) = [P_1(t), P_2(t), ..., P_N(t)] is the probability mass function of the number of parked cars. The solution P(t) can now easily be written down; namely,

    P(t) = E^(-1) M(t).   (54)

The moments were already determined in Section 2.3; therefore, it only remains to invert the matrix E. Rather than obtain the inverse of the matrix E, a simpler method of solving the same matrix Equation (53) is derived next.
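For small N, Equation (53) can nevertheless be solved directly with a general-purpose linear solver. A sketch assuming numpy (the helper name is ours); note that E, being of Vandermonde type, becomes ill-conditioned as N grows, which is one practical motivation for preferring an analytic method:

```python
import numpy as np

def pmf_from_moments(M):
    """Recover [P_1, ..., P_N] from [M_1, ..., M_N] via equation (53),
    M = E P with E[k, n] = n^k (k, n = 1..N); P_0 then follows from
    the normalization sum_n P_n = 1."""
    N = len(M)
    E = np.array([[float(n)**k for n in range(1, N + 1)]
                  for k in range(1, N + 1)])
    return np.linalg.solve(E, np.asarray(M, dtype=float))
```

As a round-trip check, the moments of any assumed probability vector should reproduce that vector.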
Since X(t) assumes only integral values n = 0, 1, ..., N, a weighted probability generating function is used in developing the method. Let

    M(z,t) = Σ_{n=1}^{N} n P_n(t) z^n   (55)

be the generating function of the first moment M_1(t) = M(1,t) of the random variable X(t). Note that for z = 0, M(0,t) = 0. Since M(z,t) can also be expanded in a Taylor series about z = 1, it will be shown that, by equating the n-th partial derivatives ∂^n M(z,t)/∂z^n of both expressions evaluated at z = 0, one obtains the probability function P_n(t).

Differentiating Equation (55) with respect to z results in:

    ∂M(z,t)/∂z = Σ_{n=1}^{N} n² P_n(t) z^(n-1),   (56)

which at z = 0 reduces to

    ∂M(0,t)/∂z = P_1(t).   (57)

Differentiating Equation (56) again with respect to z results in:

    ∂²M(z,t)/∂z² = Σ_{n=2}^{N} n²(n-1) P_n(t) z^(n-2),   (58)

which at z = 0 gives

    ∂²M(0,t)/∂z² = 2²(1) P_2(t) = 2(2!) P_2(t).   (59)

In general, one obtains:

    ∂^n M(z,t)/∂z^n = n²(n-1)(n-2)···(2)(1)P_n(t) + (n+1)²(n)(n-1)···(2)P_{n+1}(t)z + ...
                      + N²(N-1)(N-2)···(N-n+1)P_N(t)z^(N-n)
                    = Σ_{k=0}^{N-n} (n+k)² ((n+k-1)!/k!) P_{n+k}(t) z^k,   (60)

and for z = 0,

    ∂^n M(0,t)/∂z^n = n(n!) P_n(t).   (61)

Similarly, for n = N,

    ∂^N M(0,t)/∂z^N = N(N!) P_N(t).   (62)
Now let M(z,t) be expressed as a Taylor series about z = 1; that is:

    M(z,t) = Σ_{j=0}^{N} ((z-1)^j / j!) ∂^j M(1,t)/∂z^j,   (63)

where ∂^j M(1,t)/∂z^j is the j-th order partial of the weighted probability generating function M(z,t) with respect to z, evaluated at z = 1. For z = 0,

    M(0,t) = Σ_{j=0}^{N} ((-1)^j / j!) ∂^j M(1,t)/∂z^j = 0.   (64)
Differentiating the Taylor expression (63) with respect to z results in:

    ∂M(z,t)/∂z = ∂M(1,t)/∂z + (z-1)∂²M(1,t)/∂z² + ... + ((z-1)^(N-1)/(N-1)!) ∂^N M(1,t)/∂z^N,   (65)

which upon substitution of z = 0 yields:

    ∂M(0,t)/∂z = Σ_{j=0}^{N-1} ((-1)^j / j!) ∂^(j+1) M(1,t)/∂z^(j+1).   (66)

Differentiating Equation (65) again, and setting z = 0, gives:

    ∂²M(0,t)/∂z² = Σ_{j=0}^{N-2} ((-1)^j / j!) ∂^(j+2) M(1,t)/∂z^(j+2).   (67)

Continuing in the same manner, one obtains:

    ∂^n M(z,t)/∂z^n = ∂^n M(1,t)/∂z^n + (z-1)∂^(n+1) M(1,t)/∂z^(n+1) + ...
                      + ((z-1)^(N-n)/(N-n)!) ∂^N M(1,t)/∂z^N
                    = Σ_{j=0}^{N-n} ((z-1)^j / j!) ∂^(n+j) M(1,t)/∂z^(n+j).   (68)

Substituting z = 0 in Equation (68) results in:

    ∂^n M(0,t)/∂z^n = Σ_{j=0}^{N-n} ((-1)^j / j!) ∂^(n+j) M(1,t)/∂z^(n+j).   (69)

For n = N, this results in:

    ∂^N M(0,t)/∂z^N = ∂^N M(1,t)/∂z^N.   (70)
From Equations (57) and (66):

    P_1(t) = Σ_{j=0}^{N-1} ((-1)^j / j!) ∂^(j+1) M(1,t)/∂z^(j+1).   (71)

Similarly, combining Equations (59) and (67) results in:

    P_2(t) = (1/(2(2!))) Σ_{j=0}^{N-2} ((-1)^j / j!) ∂^(j+2) M(1,t)/∂z^(j+2).   (72)

In general,

    P_n(t) = (1/(n(n!))) Σ_{j=0}^{N-n} ((-1)^j / j!) ∂^(n+j) M(1,t)/∂z^(n+j);   n = 1, 2, ..., N.   (73)

In the Appendix, Section 4.4, it will be shown that

    ∂^(n+j) M(1,t)/∂z^(n+j) = Σ_{i=1}^{n+j} (-1)^(i+1) a_{n+j+2-i} M_{n+j+2-i}(t),   (74)

where the a_{n+j+2-i} are constants determined in the same Appendix. Hence:

    P_n(t) = (1/(n(n!))) Σ_{j=0}^{N-n} ((-1)^j / j!) Σ_{i=1}^{n+j} (-1)^(i+1) a_{n+j+2-i} M_{n+j+2-i}(t),   n = 1, 2, ..., N.   (75)

To determine P_0(t), note that Σ_{n=0}^{N} P_n(t) = 1; therefore,

    P_0(t) = 1 - Σ_{n=1}^{N} (1/(n(n!))) Σ_{j=0}^{N-n} ((-1)^j / j!) Σ_{i=1}^{n+j} (-1)^(i+1) a_{n+j+2-i} M_{n+j+2-i}(t).   (76)

3.6 SOLUTION FOR THE CASE OF N = 1
In Section 3.2, the solution by Laplace transform was described. Utilizing the matrix Equation (14), one obtains for N = 1:

    | s+λ   -μ  | | P_0*(s) |   | 1 |
    | -λ   s+μ  | | P_1*(s) | = | 0 |.

To get expressions for P_0*(s) and P_1*(s), one must evaluate Equation (18). For n = 0, this results in c_{00}(s) = s + μ, and for n = 1, c_{01}(s) = λ; therefore,

    P_0*(s) = (s + μ)/(sT_1(s))

and

    P_1*(s) = λ/(sT_1(s)),

where sT_1(s) = s² + s(λ + μ) is the determinant of the above matrix, as defined by Equation (17).
The determinant sT_1(s) has as its roots r_{1,0} = 0 and r_{1,1} = -(λ + μ); therefore, from Equation (16), with D(s) = sT_1(s), one obtains:

    P_0(t) = Σ_{i=0}^{1} [c_{00}(r_{1,i}) / D'(r_{1,i})] e^(r_{1,i} t)

and

    P_1(t) = Σ_{i=0}^{1} [c_{01}(r_{1,i}) / D'(r_{1,i})] e^(r_{1,i} t).

Substituting the roots r_{1,0} = 0 and r_{1,1} = -(λ + μ) in the above expressions results in:

    P_0(t) = μ/(λ+μ) + (λ/(λ+μ)) e^(-(λ+μ)t)

and

    P_1(t) = λ/(λ+μ) - (λ/(λ+μ)) e^(-(λ+μ)t),

which is the desired probability mass function.

Taking the limit of P_0(t) and P_1(t) as t approaches infinity, the steady state solutions P_0 and P_1 are obtained; that is,

    P_0 = μ/(λ+μ)  and  P_1 = λ/(λ+μ),

which agrees with the results of MORSE.
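These closed forms can be checked numerically against the forward equation P_0'(t) = -λP_0(t) + μP_1(t) and the initial condition P_0(0) = 1. A small sketch (the helper names are ours):

```python
import math

def p0(t, lam, mu):
    """P_0(t) for N = 1, from the partial-fraction inversion above."""
    return mu/(lam + mu) + (lam/(lam + mu)) * math.exp(-(lam + mu) * t)

def p1(t, lam, mu):
    """P_1(t) for N = 1."""
    return (lam/(lam + mu)) * (1.0 - math.exp(-(lam + mu) * t))

def forward_residual(t, lam, mu, h=1e-6):
    """Residual of P_0' = -lam*P_0 + mu*P_1 at time t, with P_0'
    approximated by a central difference of step h."""
    d = (p0(t + h, lam, mu) - p0(t - h, lam, mu)) / (2.0 * h)
    return d - (-lam * p0(t, lam, mu) + mu * p1(t, lam, mu))
```

The residual should vanish to within the accuracy of the finite difference, and P_0(t) + P_1(t) = 1 for all t.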
The solution by the Eigenvalue-Eigenvector method was considered in Section 3.3. For N = 1, the matrix Equation (20) becomes

    | P_0'(t) |   | -λ    μ | | P_0(t) |
    | P_1'(t) | = |  λ   -μ | | P_1(t) |.

Applying Equations (30), (31), and (32) results in a matrix

    M_R = | 1      1 |
          | λ/μ   -1 |.

Similarly, the matrix M_L contains the elements:

    M_L = | 1     1   |
          | 1   -μ/λ  |.

To obtain the normalization constants K_j, one notes that

    K_0 = β_0 · α_0 = 1 + λ/μ = (λ + μ)/μ

and

    K_1 = β_1 · α_1 = 1 + μ/λ = (λ + μ)/λ;

therefore,

    β̂_0 = [μ/(λ+μ), μ/(λ+μ)]  and  β̂_1 = [λ/(λ+μ), -μ/(λ+μ)].

The probability mass function P_n(t) can now easily be obtained from Equation (33); that is:

    P_n(t) = Σ_{j=0}^{1} α_{n,j} β̂_{j,0} e^(r_{1,j} t);   n = 0, 1.

For n = 0,

    P_0(t) = μ/(λ+μ) + (λ/(λ+μ)) e^(-(λ+μ)t),

and for n = 1,

    P_1(t) = λ/(λ+μ) - (λ/(λ+μ)) e^(-(λ+μ)t),

which is the desired result.
In Section 3.4, the method of the probability generating function was discussed and described. For N = 1, Equation (48) gives

    P_0*(s) = (1/λ)(μ + s) P_1*(s),

but

    P_1*(s) = λ/(s² + s(λ + μ));

therefore,

    P_0*(s) = (s + μ)/(s² + s(λ + μ)).

The solution to these Laplace transforms has already been discussed previously.
Finally, in Section 3.5, the solution by the method of a weighted probability generating function was described. For N = 1, Equation (75) becomes

    P_n(t) = (1/(n(n!))) Σ_{j=0}^{1-n} ((-1)^j / j!) Σ_{i=1}^{n+j} (-1)^(i+1) a_{n+j+2-i} M_{n+j+2-i}(t).

Letting n = 1, P_1(t) becomes

    P_1(t) = a_2 M_2(t).

Since n + j = 1, from Equation (97) one concludes that a_2 = 1; thus P_1(t) = M_2(t). But for N = 1, M_2(t) = M_1(t), where M_1(t) was determined in Section 2.3. At the beginning of this section, it was found that, for the roots r_{1,0} = 0 and r_{1,1} = -(λ + μ), the coefficients are

    B_{1,0} = λ/(λ+μ)  and  B_{1,1} = -λ/(λ+μ),

respectively; therefore,

    P_1(t) = M_1(t) = λ/(λ+μ) - (λ/(λ+μ)) e^(-(λ+μ)t),

but from Equation (76),

    P_0(t) = 1 - P_1(t) = μ/(λ+μ) + (λ/(λ+μ)) e^(-(λ+μ)t),

which is the desired result for the probability mass function P_n(t).
4. APPENDIX

4.1 THE STAY-TIME AND ARRIVAL DISTRIBUTION FUNCTIONS
The stay-time distribution of a car plays a central role in the formulation of the parking lot model. To determine the statistical nature of the stay-time distribution, a large random sample of recorded stay-time histories is required. Using these data, a sequence of recorded times in order of increasing length can be constructed. By plotting the number of stay-times that are shorter than a given time, divided by the total number of cases in the sample, a curve for the probability that the stay-time will be less than a certain time is obtained. If the curve follows the complement of a negative exponential function, then

    F_0(t) = P(ξ < t) = 1 - e^{-μt} ,    (77)

where 1/μ is the mean stay-time and ξ denotes the stay-time random variable. The probability density function of ξ is obtained by differentiating F_0(t); thus:

    f_0(t) = μ e^{-μt} ,    (78)

and the complementary distribution function F̄_0(t) is given by:

    F̄_0(t) = 1 - F_0(t) = e^{-μt} ,    (79)

which gives the probability that the stay-time of a car will be greater than t units of time after its arrival.
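The construction described above may be sketched in code. Since no recorded stay-time histories are available here, a synthetic exponential sample stands in for the data, and the rate μ = 0.5 is an assumption for illustration only; the empirical curve is then compared with the complement of the negative exponential, Equation (77).

```python
import bisect
import math
import random

random.seed(1)
mu = 0.5                                  # assumed rate; mean stay-time 1/mu = 2
# Stand-in for a large random sample of recorded stay-time histories,
# arranged in order of increasing length.
sample = sorted(random.expovariate(mu) for _ in range(20000))

def empirical_cdf(t):
    # Number of stay-times shorter than t, divided by the
    # total number of cases in the sample.
    return bisect.bisect_left(sample, t) / len(sample)

def model_cdf(t):
    # Equation (77): F_0(t) = 1 - e^{-mu t}
    return 1.0 - math.exp(-mu * t)
```

With a sample of this size, the plotted empirical curve lies within a few percent of the fitted exponential form at every t.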
From (78) the mean stay-time T_μ is obtained; integrating by parts gives:

    T_μ = ∫_0^∞ t dF_0(t) = ∫_0^∞ F̄_0(t) dt = 1/μ .    (80)
By a similar argument, one obtains the distribution function Q_0(t), where

    Q(t) = P(η < t) = 1 - e^{-λt}    (81)

or

    Q_0(t) = 1 - Q(t) = e^{-λt} ,    (82)

and η is the random variable denoting the time between successive car arrivals. The probability density function of η is given by:

    q(t) = λ e^{-λt} ,    (83)

where λ is the mean arrival rate. The mean time between successive arrivals is, after integrating by parts:

    T_λ = ∫_0^∞ Q_0(t) dt = 1/λ .    (84)
4.2 THE PROBABILITY MASS FUNCTIONS OF THE NUMBER OF CAR ARRIVALS
Now that the arrival of a single car has been characterized, it is possible to determine the probability that any number of cars arrive within a period of time of length t. Let t be an arbitrarily chosen time and x any time less than t. Then the chance that, starting at time 0, the first car to appear arrives in the interval [x, x + dx] is q(x) dx = λe^{-λx} dx. The chance that no car arrives in [t-x] is Q_0(t-x). Hence, the probability Q_1(t) that exactly one car arrives within an interval of length t is the product of the two probabilities integrated over all possible x satisfying x ≤ t. Therefore,

    Q_1(t) = ∫_0^t λ e^{-λx} Q_0(t-x) dx = λt e^{-λt} .    (85)

Since the chance that one car arrived in [t-x] is Q_1(t-x), the probability that exactly two cars arrived in the interval [0, t] is given, in a similar manner, by the convolution:

    Q_2(t) = ∫_0^t λ e^{-λx} Q_1(t-x) dx = ((λt)^2 / 2!) e^{-λt} .    (86)

Continuing in the same manner, by recursion, one obtains

    Q_n(t) = ((λt)^n / n!) e^{-λt} ,    (87)

which is the Poisson distribution with parameter λt.
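The recursion (85)-(87) can be checked numerically. The sketch below builds Q_1, Q_2, Q_3 on a grid by repeated convolution with q(x) = λe^{-λx} (trapezoid rule; the rate λ = 1.5 is an illustrative value) and compares the endpoint values with the Poisson probabilities of Equation (87).

```python
import math

lam = 1.5                     # illustrative mean arrival rate
T, steps = 2.0, 1000
dx = T / steps
q_grid = [lam * math.exp(-lam * i * dx) for i in range(steps + 1)]  # q(x)

# Q_0(t) = e^{-lam t}; build Q_n(t) on the grid by the convolution
# Q_n(t) = integral_0^t q(x) Q_{n-1}(t-x) dx of Equations (85)-(86).
Qs = [[math.exp(-lam * i * dx) for i in range(steps + 1)]]
for n in range(1, 4):
    prev = Qs[-1]
    cur = [0.0]
    for k in range(1, steps + 1):
        vals = [q_grid[i] * prev[k - i] for i in range(k + 1)]
        cur.append(dx * (sum(vals) - 0.5 * (vals[0] + vals[-1])))  # trapezoid
    Qs.append(cur)

def poisson(n, t):
    # Equation (87): (lam t)^n e^{-lam t} / n!
    return (lam * t) ** n * math.exp(-lam * t) / math.factorial(n)
```

The grid values at t = T reproduce the Poisson probabilities, confirming that the repeated convolution of Section 4.2 generates Equation (87).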
4.3 PROPERTIES OF THE TRANSFORMED TRANSITION PROBABILITY MATRIX

In Section 3.2, it is assumed that the (N+1)-st degree polynomial sT_N(s) = |S_N| has (N+1) real and distinct roots, permitting, therefore, the solution to be written down in the form of Equation (16). To prove that the determinant |S_N| has (N+1) distinct real roots, it will be shown first that the recursive relationship T_N(s) = T_{N-1}(s)(s+λ+Nμ) - (N-1)λμ T_{N-2}(s) holds.
THEOREM 1: Let S_k be a (k+1) x (k+1) tridiagonal matrix of the form

    S_k = [ s+λ       -μ        0        ...       0         0   ]
          [ -λ      s+λ+μ      -2μ       ...       0         0   ]
          [  0       -λ       s+λ+2μ     -3μ       ...       0   ]
          [ ...                                                  ]     (88)
          [  0       ...       -λ     s+λ+(k-1)μ      -kμ        ]
          [  0       ...        0        -λ          s+kμ        ] ;

then |S_k| = sT_k(s) = s { T_{k-1}(s)(s+λ+kμ) - (k-1)λμ T_{k-2}(s) } for k = 1, 2, 3, ..., with T_0(s) = 1 and T_1(s) = s+λ+μ.

Proof: First transform the matrix S_k by performing the following sequence of operations: (i) replace the last column of S_k by the difference of the last two columns; (ii) replace the newly obtained second-to-last row by the sum of the last two rows of the transformed matrix; (iii) replace the third-to-last row by the sum of itself and the newly obtained second-to-last row. By continuing this process of replacing the i-th-to-last row by the sum of itself and the newly obtained (i+1)-st-to-last row, a matrix D_k is obtained which is of the form
    D_k = [  s        s        s      ...      s          0     ]
          [ -λ       s+μ       s      ...      s          0     ]
          [  0       -λ       s+2μ    ...      s          0     ]
          [ ...                                                 ]     (89)
          [  0       ...      -λ     s+(k-1)μ      -(k-1)μ      ]
          [  0       ...       0       -λ          s+λ+kμ       ] .

Since the row and column operations do not change the value of the determinant, on expanding det(D_k) according to elements of its last row, and the minor belonging to the element -λ according to its last column, one obtains:

    |D_k| = (s+λ+kμ) Δ_k - (k-1)λμ Δ_{k-1} ,

where Δ_j denotes the leading principal minor of order j of D_k; but

    Δ_k = sT_{k-1}(s)

and

    Δ_{k-1} = sT_{k-2}(s) .

Hence for k = N,

    |S_N| = |D_N| = s { T_{N-1}(s)(s+λ+Nμ) - (N-1)λμ T_{N-2}(s) } = sT_N(s) ,

which is the desired result.
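Theorem 1 can be checked numerically for small k. The sketch below (illustrative rates λ = 1, μ = 2) builds the matrix S_k of (88), evaluates its determinant, and compares it with s·T_k(s) computed from the recursion.

```python
import numpy as np

lam, mu = 1.0, 2.0   # arbitrary illustrative rates

def S(k, s):
    # The (k+1) x (k+1) tridiagonal matrix of the form (88)
    M = np.zeros((k + 1, k + 1))
    for n in range(k + 1):
        if n == k:
            M[n, n] = s + k * mu         # last diagonal element: s + k mu
        else:
            M[n, n] = s + lam + n * mu   # s + lam + n mu (n = 0 gives s + lam)
        if n < k:
            M[n, n + 1] = -(n + 1) * mu  # superdiagonal: -mu, -2mu, ..., -k mu
            M[n + 1, n] = -lam           # subdiagonal: -lam
    return M

def T(k, s):
    # Recursion of Theorem 1: T_0 = 1, T_1 = s+lam+mu,
    # T_k = T_{k-1}(s+lam+k mu) - (k-1) lam mu T_{k-2}
    if k == 0:
        return 1.0
    prev, cur = 1.0, s + lam + mu
    for j in range(2, k + 1):
        prev, cur = cur, cur * (s + lam + j * mu) - (j - 1) * lam * mu * prev
    return cur
```

For every k and s tried, det S_k(s) and s·T_k(s) coincide to machine precision.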
To solve the system of linear differential equations by the Laplace transform, it was assumed that the polynomial sT_N(s) has (N+1) real distinct roots {r_{N,N}, r_{N,N-1}, ..., r_{N,j}, ..., r_{N,0}}, the first subscript referring to the degree of the polynomial and the second subscript j denoting the j-th root that satisfies sT_N(s) = 0. To show that the polynomial sT_N(s) has real distinct roots, one needs the following theorem.
THEOREM 2: Let S_k be a (k+1) x (k+1) tridiagonal transition matrix of the form (88); then the polynomial equation sT_k(s) = 0 has exactly (k+1) distinct real roots {r_{k,k}, r_{k,k-1}, ..., r_{k,j}, ..., r_{k,0}} such that for each fixed k, r_{k,j-1} > r_{k,j}, j = 1, 2, 3, ..., k.

Proof: The root 0 will obviously be assigned to r_{k,0}. Consider the equation T_k(s) = 0 only. The proof of Theorem 2 will be by induction on k.
Suppose k = 1; then by Theorem 1,

    T_1(s) = s + λ + μ .

Thus, if r_{1,1} = -(λ+μ), then T_1(r_{1,1}) = 0; r_{1,1} is obviously real, and r_{1,1} < r_{1,0} = 0.

Let k = 2; then

    T_2(s) = T_1(s)(s+λ+2μ) - λμ T_0(s) .

For s = -μ (remembering that T_0(s) = 1) one obtains

    T_2(-μ) = λ(λ+μ) - λμ = λ^2 > 0 .

Therefore, since T_2(r_{1,1}) = -λμ < 0, there exists a real root r_{2,1} satisfying T_2(r_{2,1}) = 0, and r_{1,1} < r_{2,1} < -μ. Since the second-degree polynomial T_2(s) has a leading coefficient of one, it follows that lim_{s→-∞} T_2(s) > 0. But T_2(r_{1,1}) < 0, and thus there exists a root r_{2,2} of T_2(s) such that r_{2,2} < r_{1,1}. Thus, the roots {r_{2,2}, r_{2,1}} fall into the interior of the intervals (-∞, r_{1,1}) and (r_{1,1}, -μ).
Suppose that for p = 1, 2, ..., k the polynomial T_p(s) has p distinct real roots satisfying the following inequalities:

    -∞ < r_{p,p} < r_{p-1,p-1} < r_{p,p-1} < r_{p-1,p-2} < ... < r_{p,2} < r_{p-1,1} < r_{p,1} < -μ ,    (92)

from which it follows that r_{k,k} < r_{k,k-1} < ... < r_{k,1}. Consider then

    T_{k+1}(s) = T_k(s)(s+λ+(k+1)μ) - kλμ T_{k-1}(s) ;

for any root r_{k,j} of T_k(s), it follows that

    T_{k+1}(r_{k,j}) = -kλμ T_{k-1}(r_{k,j}) .    (93)

By (92), T_{k-1}(s) has exactly one root between r_{k,j} and r_{k,j+1}, and thus T_{k-1}(r_{k,j}) and T_{k-1}(r_{k,j+1}) have opposite signs; from (93), T_{k+1}(r_{k,j}) and T_{k+1}(r_{k,j+1}) must then have opposite signs as well. Therefore, T_{k+1}(s) has at least one real root r_{k+1,j+1} such that r_{k,j+1} < r_{k+1,j+1} < r_{k,j}. Thus, at least k-1 distinct real roots exist.

For s = -μ, T_{k+1}(-μ) = λ^{k+1} > 0. By (92) and (93), T_{k+1}(r_{k,1}) < 0; hence there exists a real root r_{k+1,1} for which T_{k+1}(r_{k+1,1}) = 0 and r_{k,1} < r_{k+1,1} < -μ. For k odd, T_{k+1}(r_{k,k}) < 0 and lim_{s→-∞} T_{k+1}(s) > 0; hence there exists a real root r_{k+1,k+1} for which T_{k+1}(r_{k+1,k+1}) = 0 and -∞ < r_{k+1,k+1} < r_{k,k}. For k even, a similar argument leads to the same conclusion.

Thus (k+1) distinct real roots exist. Since the polynomial is of (k+1)-st degree, there exist exactly (k+1) roots. The proof is thus complete.
4.4 THE PARTIAL DERIVATIVES OF THE WEIGHTED PROBABILITY GENERATING FUNCTION

In Section 3.5, a solution by the method of a weighted probability generating function was derived. The probability mass function P_n(t) of the number of cars parked at time t was expressed in terms of its moments M_k(t). It was shown that:

    P_n(t) = (1/(n·n!)) Σ_{j=0}^{N-n} ((-1)^j / j!) ∂^{n+j}M(1,t)/∂z^{n+j} ,    n = 1, 2, ..., N.

We now show that

    ∂^{n+j}M(1,t)/∂z^{n+j} = Σ_{i=1}^{n+j} (-1)^{i+1} a_{n+j+2-i} M_{n+j+2-i}(t) .
To show that the above relationship holds, one must first show that:

    ∂^k M(1,t)/∂z^k = Σ_{n=1}^{N} n^2 (n-1)(n-2) ... (n-k+1) P_n(t) .

From Equation (56) one obtains for z = 1

    ∂M(1,t)/∂z = Σ_{n=1}^{N} n^2 P_n(t) = M_2(t) ,

and from Equation (58) one obtains for z = 1

    ∂^2 M(1,t)/∂z^2 = 2^2(1) P_2(t) + 3^2(2) P_3(t) + ... + N^2(N-1) P_N(t) .    (94)

In general, one can see from (60) that:

    ∂^k M(1,t)/∂z^k = 1^2(1-1)(1-2) ... (1-k+1) P_1(t) + 2^2(2-1)(2-2)(2-3) ... (2-k+1) P_2(t) + ...
                      + k^2(k-1)(k-2) ... (1) P_k(t) + ... + N^2(N-1)(N-2) ... (N-k+1) P_N(t)
                    = Σ_{n=1}^{N} n^2 (n-1)(n-2) ... (n-k+1) P_n(t) .    (95)
Let

    p(n) = (n-1)(n-2) ... (n-k+1) ;    (96)

then

    p(n) = u_{k-1} n^{k-1} - u_{k-2} n^{k-2} + u_{k-3} n^{k-3} - ... + (-1)^{k-1} u_0 ,

where

    u_{k-1} = 1 ,
    u_{k-2} = 1 + 2 + 3 + ... + (k-1) ,
    u_{k-3} = 1·2 + 1·3 + ... + 1·(k-1) + 2·3 + 2·4 + ... + (k-2)(k-1) ,
      .
      .
    u_0 = 1·2·3 ... (k-1) .

Thus, when multiplying p(n) by n^2, one obtains

    n^2 p(n) = u_{k-1} n^{k+1} - u_{k-2} n^{k} + u_{k-3} n^{k-1} - ... + (-1)^{k-1} u_0 n^2 .

The subscript of each coefficient is adjusted to reflect the power of n, so that with

    a_{k+2-i} = u_{k-i} ,    (97)

one has

    n^2 p(n) = Σ_{i=1}^{k} (-1)^{i+1} a_{k+2-i} n^{k+2-i} .    (98)

Substituting Equation (98) in Equation (95) results in:

    ∂^k M(1,t)/∂z^k = Σ_{i=1}^{k} (-1)^{i+1} a_{k+2-i} Σ_{n=1}^{N} n^{k+2-i} P_n(t)
                    = Σ_{i=1}^{k} (-1)^{i+1} a_{k+2-i} M_{k+2-i}(t) .    (99)

Hence, if k = n+j,

    ∂^{n+j}M(1,t)/∂z^{n+j} = Σ_{i=1}^{n+j} (-1)^{i+1} a_{n+j+2-i} M_{n+j+2-i}(t) ,    (100)

which concludes the derivation.
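The inversion just derived, together with the normalization Equation (76) for P_0(t), can be checked at a fixed t on an arbitrary distribution. The sketch below takes the weighted generating function in the form M(z) = Σ_n n P_n z^n, whose derivatives at z = 1 are exactly the sums of Equation (95), so that the inversion reads P_n = (1/(n·n!)) Σ_j ((-1)^j/j!) M^{(n+j)}(1); the probabilities used are illustrative values only.

```python
import math

N = 3
P = [0.1, 0.2, 0.3, 0.4]   # arbitrary illustrative distribution on {0, ..., N}

def dM(k):
    # k-th derivative at z = 1 of the weighted generating function
    # M(z) = sum_n n P_n z^n, i.e. Equation (95):
    # sum_n n^2 (n-1)(n-2) ... (n-k+1) P_n
    total = 0.0
    for n, p in enumerate(P):
        term = float(n)              # the weight n
        for i in range(k):
            term *= (n - i)          # n (n-1) ... (n-k+1)
        total += term * p
    return total

def recover(n):
    # Inversion: P_n = (1/(n n!)) sum_{j=0}^{N-n} ((-1)^j / j!) dM(n+j), n >= 1
    s = sum((-1) ** j / math.factorial(j) * dM(n + j) for j in range(N - n + 1))
    return s / (n * math.factorial(n))
```

The probabilities P_1, ..., P_N are recovered exactly, and Equation (76) then gives P_0 as one minus their sum.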
5. BIBLIOGRAPHY

(1) BIRKHOFF G., and MACLANE S. (1953). A Survey of Modern Algebra. MACMILLAN, NEW YORK.

(2) FELLER W. (1958). An Introduction to Probability Theory and its Applications. WILEY, NEW YORK.

(3) MORSE P.M. (1958). Queues, Inventory and Maintenance. WILEY, NEW YORK.

(4) TAKACS L. (1962). Introduction to the Theory of Queues. OXFORD UNIVERSITY PRESS, NEW YORK.

(5) THOMSON W.T. (1950). Laplace Transformation Theory and Engineering Applications. PRENTICE-HALL, NEW YORK.