INSTITUTE OF STATISTICS
BOX 5457
STATE COLLEGE STATION
RALEIGH, NORTH CAROLINA

UNIVERSITY OF NORTH CAROLINA
Department of Statistics
Chapel Hill, N. C.

A STUDY OF SOME SINGLE-COUNTER QUEUEING PROCESSES

by

Meckinley Scott

May 1964

This research was supported by the Office of Naval
Research under contract No. Nonr-855(09) for research
in probability and statistics at the University of
North Carolina, Chapel Hill, N. C. Reproduction in
whole or in part is permitted for any purpose of the
United States Government.

Institute of Statistics
Mimeo Series No. 389
ACKNOWLEDGMENTS

I wish to thank Professors P. Naor and W. L. Smith for suggesting the problems dealt with in this thesis and for their helpful guidance and encouragement.

I wish to thank Dr. M. R. Leadbetter for his many valuable suggestions and comments and for his careful examination of the manuscript. I am also indebted to Professors N. L. Johnson, W. J. Hall and S. N. Roy for reading the manuscript and for making helpful comments.

I wish to thank the Government of India and the Office of Naval Research for financial support.

I am most grateful to Mrs. Doris Gardner and also to Mrs. Linda Burgess for their quick and accurate typing of the manuscript. Finally, I appreciate all the help of various forms given by Miss Martha Jordan.
TABLE OF CONTENTS

CHAPTER                                                              PAGE

      ACKNOWLEDGMENTS                                                  ii

      INTRODUCTION                                                     vi

I     A SINGLE SERVER QUEUE WITH CONTROLLABLE ARRIVAL
      RATE - A SPECIAL CASE                                             1
      1.1  Introduction                                                 1
      1.2  The distribution of the number of units in the system
      1.3  The expected number of units and the expected
           waiting time                                                14
      1.4  The generating function of the number of units
           in the system                                               17
      1.5  The Laplace transform of the busy period
           distribution                                                20
      1.6  The Laplace transform of distributions concerning
           first passage times                                         26
      1.7  The generating function of the Laplace transform
           with respect to time of the time dependent
           probabilities                                               27
      1.8  Remarks                                                     29

II    A SINGLE SERVER QUEUE WITH CONTROLLABLE ARRIVAL
      RATE - THE GENERAL CASE                                          31
      2.1  Introduction                                                31
      2.2  The distribution of the number of units in the system       33
      2.3  The expected number of units and the expected
           waiting time                                                50
      2.4  The generating function of the number of units
           in the system                                               52
      2.5  The Laplace transform of the busy period
           distribution                                                57
      2.6  The Laplace transform of distributions concerning
           first passage times                                         63
      2.7  The case when the number of arrival rates is
           infinite                                                    65
      2.8  Remarks                                                     66

III   A SINGLE COUNTER QUEUE WITH CHANGEABLE SERVICE
      RATE                                                             67
      3.1  Introduction                                                67
      3.2  The distribution of the number of units in the system       69
      3.3  The expected number of units and the expected
           waiting time                                                81
      3.4  The generating function of the number of units
           in the system                                               82
      3.5  The Laplace transform of the busy period
           distribution                                                85
      3.6  The Laplace transform of distributions concerning
           first passage times                                         91
      3.7  The case when the number of service rates is
           infinite                                                    92
      3.8  Remarks

IV    A SINGLE SERVER QUEUE WITH VARIABLE ARRIVAL RATE
      4.1  Introduction
      4.2  The distribution problems concerning
           inter-arrival times                                         95
      4.3  The distribution problems concerning
           the number of arrivals                                     101
      4.4  The server's idle fraction and discussion of the
           queue equations                                            109

      BIBLIOGRAPHY                                                    113

      APPENDIX: A NOTE ON COST STRUCTURE AND OPTIMIZATION             114
INTRODUCTION

This thesis deals with the study of a class of single-counter queueing processes. In Chapters I, II, and III we assume that those in charge of the system are capable of controlling "to some extent" the arrival and/or service intensity. The purpose of this is, in effect, to reduce the mean number of units waiting for service to a desirable length. In Chapter IV we assume that the arrival intensity is neither constant nor controllable but rather a random process of a Markovian type.

Chapter I constitutes a special case of, and is introductory to, the more general case dealt with in Chapter II. Furthermore, we are able to get explicit forms for most formulas in the special case. A brief description of the model for the special case is as follows: The population of units that demand service from the station from time to time is classified into two categories, say 0 and 1. When the number of units in the system (ℓ_s) reaches a certain prescribed number R, the server forbids any category 0 unit from joining the queue, and invites any such unit again for service, if and only if, ℓ_s reduces to a size r (r ≥ 0, r < R). The exclusion rule does not, however, affect those who are already waiting in line. The service time is assumed to have an exponential distribution with parameter μ (= reciprocal of the average service time) for all units irrespective of the categories to which they belong.
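The switching doctrine just described can be sketched as a small state machine (an illustrative Python sketch, not part of the original analysis; the values λ₀ = 0.9, λ₁ = 0.6, r = 2, R = 5 are hypothetical):

```python
# A minimal sketch of the two-rate control doctrine: the arrival intensity
# is lam0 until the queue length reaches R, then lam1 until it falls back to r.

class ControlledQueue:
    def __init__(self, lam0, lam1, r, R):
        assert 0 <= r < R
        self.lam0, self.lam1, self.r, self.R = lam0, lam1, r, R
        self.n = 0          # units in the system (ell_s)
        self.fast = True    # True while the arrival rate is lam0

    @property
    def rate(self):
        return self.lam0 if self.fast else self.lam1

    def arrival(self):
        self.n += 1
        if self.n == self.R:
            self.fast = False   # switch lam0 -> lam1 on reaching R

    def departure(self):
        self.n -= 1
        if self.n == self.r:
            self.fast = True    # switch back lam1 -> lam0 on reaching r

q = ControlledQueue(lam0=0.9, lam1=0.6, r=2, R=5)
rates = []
for step in "+++++---":          # five arrivals, then three departures
    q.arrival() if step == "+" else q.departure()
    rates.append(q.rate)
print(rates)   # rate drops to 0.6 at the fifth arrival, returns to 0.9 at n = r
```

Note that units already in line are unaffected by the switch; only the admission intensity changes.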
The first part of Chapter I deals with the determination of the stationary probability distribution of ℓ_s and the problems arising therefrom, for example, evaluation of Prob[r ≤ ℓ_s ≤ R], certain conditional probabilities, etc. For the existence of steady state (stationary) conditions we have to impose the condition ρ₁ = λ₁/μ < 1. It is intuitively obvious that if ρ₁ ≥ 1, the server may not have any idle time and that a divergent queue may ensue (cf. Cox and Smith [2], page 41). There is no such condition imposed on ρ₀ = λ₀/μ. The expected (mean) number of units in the system (L_s) is obtained directly from the stationary distribution, but for the purpose of calculating higher moments it is easier to deal with the probability generating function, which we give in Section 1.4. The expected waiting time in the system (W_s) is obtained from the formula L_s = λ̄ W_s, where λ̄ is the average arrival intensity (cf. Little [4]).

In Sections 1.5 and 1.6 we are concerned with some distribution problems. In the case of the distribution of the busy period we obtain its Laplace transform and its expected length. For this, we consider a modified process which ceases as soon as the number of units in the system falls to zero (cf. Bailey [1]). Similar is the case of the distribution of the time the system spends with λ₀ (or λ₁) as the arrival intensity before it changes to λ₁ (or λ₀) for the first time. Using the same mathematical technique as in the above mentioned distribution problems, we also obtain the generating function of the Laplace transform with respect to time, of the time-dependent probability of the number of units in the system. It appears that it is very difficult to obtain the inverse transform.
In Chapter II we also assume that the service time is exponentially distributed with parameter μ, but the population of units demanding service is comprised of (N + 1) categories, say 0, 1, 2, ..., N. Consequently, the arrival intensity λ assumes (N + 1) values λ₀, λ₁, ..., λ_N and, associated with these values of λ, we have N pairs of integers (r₁, R₁), (r₂, R₂), ..., (r_N, R_N) satisfying r₁ ≥ 0, r_i < R_i (i = 1, 2, ..., N); r_i < r_{i+1}, R_i < R_{i+1} (i = 1, 2, ..., N - 1). When ℓ_s reaches a size R_{i+1} (i = 0, 1, ..., N-1), the server forbids any unit belonging to categories 0, 1, ..., i from joining the queue, and any category i unit is invited again for service, if and only if, ℓ_s reduces to a size r_{i+1}. The exclusion rule does not, however, affect those who are already waiting in line.

The problems we deal with in Chapter II are exactly the same as those of Chapter I, except with (N + 1) categories instead of only 2. We are able to obtain explicit expressions for both the stationary distribution and the probability generating function of ℓ_s. For the busy period distribution we simply show how to obtain its Laplace transform from a set of (2N + 1) equations involving (2N + 1) unknowns. The distribution problems concerning first passage times for the parameters λ₀ and λ_N are exactly the same as those concerning the parameters λ₀ and λ₁ respectively of Chapter I. For the parameter λ_i where i ≠ 0, N, the problem is solved by considering a set of difference equations satisfied by the f_m(t)'s, where f_m(t) is the probability density function (p.d.f.) of the required distribution starting initially from a position m. This is discussed in detail in Section 2.6. We have not given the expression for the generating function of the Laplace transform with respect to time of the time-dependent probabilities, though we are able to do so as in Chapter I, since we are unable to obtain its inverse transform.

Lastly, in Chapter II mention must be made that the existence of stationary conditions is achieved only when ρ_N = λ_N/μ < 1. There are no restrictions whatsoever on the values of ρ_i = λ_i/μ (i = 0, 1, ..., N-1). Also, when N is infinite we will still have a stationary distribution provided a certain series converges (see Section 2.7).
A study of the above queueing process suggests another problem which is mathematically analogous to the problems discussed before but whose physical interpretation is quite different. This is considered in Chapter III. It is the case where the arrival intensity is constant but the service rate μ changes according to a rule very similar to the change of λ in Chapters I and II. The (N + 1) values of μ are denoted by μ₀, μ₁, ..., μ_N and, associated with these values of μ, are N pairs of integers (r₁, R₁), (r₂, R₂), ..., (r_N, R_N) satisfying the same requirements as in Chapter II. The problems we deal with in Chapter III are exactly the same as in Chapters I and II, and the corresponding results are very similar in form. Thus, for example, the probability of the server being idle in the case when N = 1 is given by the reciprocal of

    1/(1 - ρ₀) - (R - r) [(ρ₀ - ρ₁)/(1 - ρ₁)] ρ₀^(R+r)/(ρ₀^r - ρ₀^R)

in the case where λ is changing and μ fixed (Chapter I), and by the reciprocal of the corresponding expression with b_i = λ/μ_i (i = 0, 1) in place of ρ_i, in the case where μ is changing and λ fixed (Chapter III).

In Chapter III we have not considered the above special case (N = 1), but all appropriate formulas can be obtained from the general case by proper identification of the parameters. In the special case, it may be noted that if we put μ₀ = 0, r = 0 we have what is known as the (0/R) doctrine introduced by Yadin and Naor [7].
The title of Chapter III is "A single-counter queue with changeable service rate." The reason why we call it a single-counter rather than a single-server queue is because, in effect, we have only one counter but we bring in or drop out helpers or auxiliary servers depending upon whether the number of units in the system increases or decreases within a certain specified range.

In Chapter IV we consider a queueing process under the assumptions that the service time has an exponential distribution with parameter μ and the arrival intensity λ is neither constant nor controllable, but rather a random process of a Markovian type. We have considered a very special case where λ assumes only two values λ₀ and λ₁, though in principle the methods developed are general. We assume that the probability of an arrival in a small interval of time of length δt is either λ₀δt or λ₁δt. If at any instant of time the arrival rate is λ₀, then in the next interval δt the probability that λ₀ will change to λ₁ is κ₀δt. Similarly, when the arrival rate is λ₁, the probability that it will change to λ₀ in the next interval δt is κ₁δt.

A considerable part of Chapter IV is devoted to problems arising from the arrival process described above. Specifically, these problems are (i) the distribution of the inter-arrival time, (ii) the joint distribution of two successive inter-arrival times, (iii) the probability generating function of the number of arrivals in an interval of length t, and (iv) the joint probability generating function of the number of arrivals in two successive non-overlapping intervals of lengths t₁ and t₂. As expected, when we put λ₀ = λ₁ (= λ, say), all results reduce to those of Poisson arrivals with parameter λ. A result which is intuitively obvious is that the mean number of arrivals in an interval of length t is given by (π₀λ₀ + π₁λ₁)t, where π_i is the proportion of time in which the arrival rate is λ_i (i = 0, 1).
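The stated mean (π₀λ₀ + π₁λ₁)t can be checked directly, since E[N(t)] is the time-integral of E[λ(s)] and the two-state phase chain relaxes exponentially to (π₀, π₁) = (κ₁, κ₀)/(κ₀ + κ₁) (an illustrative Python sketch; the rates below are hypothetical, not values from the thesis):

```python
import math

lam0, lam1, k0, k1 = 0.9, 0.6, 0.5, 1.5    # illustrative rates
pi0, pi1 = k1 / (k0 + k1), k0 / (k0 + k1)  # stationary phase probabilities

def mean_arrivals(t, p0_start=1.0):
    # E[N(t)] = integral of E[lambda(s)] ds, with the phase probability
    # relaxing as p0(s) = pi0 + (p0(0) - pi0) * exp(-(k0 + k1) s).
    kappa = k0 + k1
    c = (p0_start - pi0) * (lam0 - lam1)
    return (pi0*lam0 + pi1*lam1) * t + c * (1 - math.exp(-kappa*t)) / kappa

t = 1000.0
print(mean_arrivals(t) / t)    # close to pi0*lam0 + pi1*lam1 for large t
```

Starting the phase chain in its stationary distribution (p0_start = pi0) makes the mean exactly linear in t, which is the intuitive content of the result quoted above.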
As far as the queueing part is concerned, we can determine only the fraction of time in which the server is idle or busy. We are able to write down the queueing equations completely; but the solution, at present, seems intractable. The probability of the server being idle is obtained by the method of generating functions, and this result is intuitively obvious. We are not able to determine p(0, λ₀) and p(0, λ₁) separately, where p(0, λ_i) is the probability that the server is idle and that the arrival rate is λ_i.

If, with the same arrival process, we consider the queueing process with a limited waiting room, of size N say, it is possible to obtain an explicit expression for the stationary distribution for small values of N. For large values of N, the solution can be expressed in a determinantal form.
CHAPTER I

A SINGLE-SERVER QUEUE WITH CONTROLLABLE
ARRIVAL RATE - A SPECIAL CASE

1.1 Introduction. In this chapter we are considering a single-server queueing process under the following assumptions:

(a) The service time is exponentially distributed with mean duration 1/μ.

(b) The arrival process is what may be called a "controllable Poisson arrival process with two arrival rates".

Meaning of "controllable Poisson arrival process with two arrival rates": In any queueing process we have a cycle consisting of two phases called the idle and busy periods respectively. Suppose at some instant of time the system is idle, that is, there are no units (customers) in the system. Units then start coming in a Poisson stream with intensity λ = λ₀. The arrival rate λ is equal to λ₀ as long as the number of units in the system (ℓ_s) is less than some prescribed positive integer R. As soon as ℓ_s reaches a value R, the arrival rate λ changes from λ₀ to λ₁ instantaneously and it continues to be λ₁ as long as ℓ_s is strictly greater than some other prescribed integer r (r ≥ 0, r < R). When ℓ_s reaches the value r, λ changes back from λ₁ to λ₀, and the same process is repeated; that is, the arrival rate will continue to be λ₀ until ℓ_s takes a value R again.
With the above assumptions, we shall determine the stationary probability distribution of the number of units in the system (and also give the expression for its generating function) and its expected value, the expected waiting time in the system, the Laplace transform of the distribution of the busy period and its expected value, etc. In the case of the number of units in the system we shall also give an expression for the generating function of the Laplace transform with respect to time of the time dependent probabilities.
1.2 The distribution of ℓ_s, the number of units in the system: Let (n, λ_i) denote the state that ℓ_s = n and λ_i is the rate of arrival, where n = 0, 1, 2, ... and i = 0, 1. By virtue of our assumptions, clearly, the states (n, λ₀) for n ≥ R and the states (n, λ₁) for n ≤ r are inadmissible. The diagram associated with the process is the following:

    [Diagram 1.1 (transition diagram of the process) not reproduced]

Let p(t, n, λ_i) denote the probability, at time t, of the state (n, λ_i), and denote by P(n, λ_i) the corresponding stationary probability. By employing the usual δt technique, we have the following differential-difference equations for the process:
(1.2.1)  p'(t, 0, λ₀) = -λ₀ p(t, 0, λ₀) + μ p(t, 1, λ₀)

(1.2.2)  p'(t, n, λ₀) = -(μ + λ₀) p(t, n, λ₀) + μ p(t, n+1, λ₀) + λ₀ p(t, n-1, λ₀)    (n = 1, 2, ..., r-1)

(1.2.3)  p'(t, r, λ₀) = -(μ + λ₀) p(t, r, λ₀) + μ p(t, r+1, λ₀) + λ₀ p(t, r-1, λ₀) + μ p(t, r+1, λ₁)

(1.2.4)  p'(t, n, λ₀) = -(μ + λ₀) p(t, n, λ₀) + μ p(t, n+1, λ₀) + λ₀ p(t, n-1, λ₀)    (n = r+1, r+2, ..., R-2)

(1.2.5)  p'(t, R-1, λ₀) = -(μ + λ₀) p(t, R-1, λ₀) + λ₀ p(t, R-2, λ₀)

(1.2.6)  p'(t, r+1, λ₁) = -(μ + λ₁) p(t, r+1, λ₁) + μ p(t, r+2, λ₁)

(1.2.7)  p'(t, n, λ₁) = -(μ + λ₁) p(t, n, λ₁) + μ p(t, n+1, λ₁) + λ₁ p(t, n-1, λ₁)    (n = r+2, r+3, ..., R-1)

(1.2.8)  p'(t, R, λ₁) = -(μ + λ₁) p(t, R, λ₁) + μ p(t, R+1, λ₁) + λ₁ p(t, R-1, λ₁) + λ₀ p(t, R-1, λ₀)

(1.2.9)  p'(t, n, λ₁) = -(μ + λ₁) p(t, n, λ₁) + μ p(t, n+1, λ₁) + λ₁ p(t, n-1, λ₁)    (n ≥ R+1)
The following difference equations, which the stationary probabilities satisfy, are obtained from the above differential-difference equations by merely putting p'(t, n, λ_i) = 0 and suppressing the argument t from p(t, n, λ_i) for all n and i. They can also be obtained directly from the fact that the intensity of leaving a certain state is equal to the intensity of coming into the state (cf. Morse [5], page 16).

(1.2.10)  μ P(1, λ₀) - λ₀ P(0, λ₀) = 0

(1.2.11)  μ P(n+2, λ₀) - (μ + λ₀) P(n+1, λ₀) + λ₀ P(n, λ₀) = 0    (n = 0, 1, 2, ..., r-2)

(1.2.12)  μ P(r+1, λ₀) + μ P(r+1, λ₁) - (μ + λ₀) P(r, λ₀) + λ₀ P(r-1, λ₀) = 0

(1.2.13)  μ P(n+2, λ₀) - (μ + λ₀) P(n+1, λ₀) + λ₀ P(n, λ₀) = 0    (n = r, r+1, ..., R-3)

(1.2.14)  -(μ + λ₀) P(R-1, λ₀) + λ₀ P(R-2, λ₀) = 0

(1.2.15)  μ P(r+2, λ₁) - (μ + λ₁) P(r+1, λ₁) = 0

(1.2.16)  μ P(n+2, λ₁) - (μ + λ₁) P(n+1, λ₁) + λ₁ P(n, λ₁) = 0    (n = r+1, r+2, ..., R-2)

(1.2.17)  μ P(R+1, λ₁) - (μ + λ₁) P(R, λ₁) + λ₁ P(R-1, λ₁) + λ₀ P(R-1, λ₀) = 0

(1.2.18)  μ P(n+2, λ₁) - (μ + λ₁) P(n+1, λ₁) + λ₁ P(n, λ₁) = 0    (n ≥ R)
We shall solve the above difference equations and express all the P(n, λ_i)'s in terms of P(0, λ₀), where P(0, λ₀) will be determined later from the normalizing equation.

Notice that (1.2.11) is a homogeneous difference equation of the second order with (1.2.10) giving the initial condition. We shall first assume that λ₀ ≠ μ; then the solution to (1.2.10) and (1.2.11) is

(1.2.19)  P(n, λ₀) = ρ₀ⁿ P(0, λ₀)    (n = 0, 1, 2, ..., r),

where ρ₀ = λ₀/μ. Similarly, the solution to (1.2.13) with (1.2.14) as the initial condition is given by

(1.2.20)  P(n, λ₀) = [ρ₀^r (ρ₀ⁿ - ρ₀^R)/(ρ₀^r - ρ₀^R)] P(0, λ₀)    (n = r, r+1, ..., R-1).

We can write equation (1.2.12) as

    P(r+1, λ₁) = (1 + ρ₀) P(r, λ₀) - ρ₀ P(r-1, λ₀) - P(r+1, λ₀),

and substituting the values of P(r, λ₀), P(r-1, λ₀) and P(r+1, λ₀) from (1.2.19) and (1.2.20) we have the relation

(1.2.21)  P(r+1, λ₁) = [ρ₀^(R+r) (1 - ρ₀)/(ρ₀^r - ρ₀^R)] P(0, λ₀).

Again from (1.2.20) we have

(1.2.22)  P(R-1, λ₀) = [ρ₀^(R+r-1) (1 - ρ₀)/(ρ₀^r - ρ₀^R)] P(0, λ₀).
Thus we have

(1.2.23)  P(r+1, λ₁) = ρ₀ P(R-1, λ₀).

The case where λ₀ = μ, or ρ₀ = 1, is obtained by solving the equations directly or by taking the limits of (1.2.19) and (1.2.20) as ρ₀ tends to 1. The following are the solutions:

(1.2.24)  P(n, λ₀) = P(0, λ₀)    (n = 0, 1, ..., r),

(1.2.25)  P(n, λ₀) = [(R - n)/(R - r)] P(0, λ₀)    (n = r, r+1, ..., R-1),

(1.2.26)  P(r+1, λ₁) = P(R-1, λ₀) = P(0, λ₀)/(R - r).

In solving the remaining equations we shall assume that ρ₁ = λ₁/μ < 1. The fact that we need the restriction ρ₁ < 1 will become obvious later when we impose the normalizing condition on the probabilities.

The solution to (1.2.16) with (1.2.15) giving the initial condition is

(1.2.27)  P(n, λ₁) = P(r+1, λ₁) (1 - ρ₁^(n-r))/(1 - ρ₁)    (n = r+1, r+2, ..., R),

where P(r+1, λ₁) is given by (1.2.21) or (1.2.26) depending upon the value of ρ₀.

Next, equation (1.2.17) can be written as

    P(R+1, λ₁) = (1 + ρ₁) P(R, λ₁) - ρ₁ P(R-1, λ₁) - ρ₀ P(R-1, λ₀).
Making use of (1.2.27) and (1.2.23) or (1.2.26), depending upon the value of ρ₀, and after simplification we obtain

(1.2.28)  P(R+1, λ₁) = ρ₁ P(r+1, λ₁) (1 - ρ₁^(R-r))/(1 - ρ₁).

Lastly, the solution to the difference equation (1.2.18) with (1.2.28) giving the initial condition is

(1.2.29)  P(n, λ₁) = P(R, λ₁) ρ₁^(n-R)    (n > R).

On normalization we obtain

(1.2.30)  Σ_{n=0}^∞ [P(n, λ₀) + P(n, λ₁)] = 1.

Now

    Σ_{n=0}^{R-1} P(n, λ₀) = Σ_{n=0}^{r} P(n, λ₀) + Σ_{n=r+1}^{R-1} P(n, λ₀).

Case (i): ρ₀ ≠ 1. We have

(1.2.31)  Σ_{n=0}^{r} P(n, λ₀) = Σ_{n=0}^{r} ρ₀ⁿ P(0, λ₀) = [(1 - ρ₀^(r+1))/(1 - ρ₀)] P(0, λ₀)

and

(1.2.32)  Σ_{n=r+1}^{R-1} P(n, λ₀) = [ρ₀^r/(ρ₀^r - ρ₀^R)] [(ρ₀^(r+1) - ρ₀^R)/(1 - ρ₀) - (R - r - 1) ρ₀^R] P(0, λ₀).
Adding (1.2.31) and (1.2.32) and simplifying we obtain

(1.2.33)  Σ_{n=0}^{R-1} P(n, λ₀) = [1/(1 - ρ₀) - (R - r) ρ₀^(R+r)/(ρ₀^r - ρ₀^R)] P(0, λ₀)
                                 = P(0, λ₀)/(1 - ρ₀) - (R - r) P(r+1, λ₁)/(1 - ρ₀),

on using the relation (1.2.21).

Case (ii): ρ₀ = 1. We have

(1.2.34)  Σ_{n=0}^{r} P(n, λ₀) = Σ_{n=0}^{r} P(0, λ₀) = (r + 1) P(0, λ₀)

and

(1.2.35)  Σ_{n=r+1}^{R-1} P(n, λ₀) = Σ_{n=r+1}^{R-1} [(R - n)/(R - r)] P(0, λ₀)
                                   = [P(0, λ₀)/(R - r)] [R(R - r - 1) - ½(R + r)(R - r - 1)]
                                   = ½ (R - r - 1) P(0, λ₀).

Adding (1.2.34) and (1.2.35) we have, when ρ₀ = 1,

(1.2.36)  Σ_{n=0}^{R-1} P(n, λ₀) = ½ (R + r + 1) P(0, λ₀).

Note that (1.2.36) can also be obtained by taking the limit of (1.2.33) as ρ₀ tends to 1.

Next,

(1.2.37)  Σ_{n=r+1}^{R-1} P(n, λ₁) = [P(r+1, λ₁)/(1 - ρ₁)] [(R - r - 1) - ρ₁(1 - ρ₁^(R-r-1))/(1 - ρ₁)]

and

(1.2.38)  Σ_{n=R}^∞ P(n, λ₁) = P(R, λ₁)/(1 - ρ₁) = P(r+1, λ₁) (1 - ρ₁^(R-r))/(1 - ρ₁)²,

provided ρ₁ < 1. Note that if ρ₁ ≥ 1, the series Σ_{n=R}^∞ P(n, λ₁) would not converge. That is, we have a stationary distribution if and only if ρ₁ < 1. This condition is intuitively obvious.

Summing (1.2.37) and (1.2.38) we obtain

(1.2.39)  Σ_{n=r+1}^∞ P(n, λ₁) = (R - r) P(r+1, λ₁)/(1 - ρ₁).

Substituting (1.2.33) and (1.2.39) into (1.2.30) we have, for the case ρ₀ ≠ 1,

(1.2.40)  P(0, λ₀)/(1 - ρ₀) - (R - r) P(r+1, λ₁)/(1 - ρ₀) + (R - r) P(r+1, λ₁)/(1 - ρ₁) = 1,

or on using (1.2.21) we obtain

(1.2.41)  P(0, λ₀) = 1 / [1/(1 - ρ₀) - (R - r) ((ρ₀ - ρ₁)/(1 - ρ₁)) ρ₀^(R+r)/(ρ₀^r - ρ₀^R)]    (ρ₀ ≠ 1),

and substituting (1.2.36) and (1.2.39) into (1.2.30) we have, for the case ρ₀ = 1,

(1.2.42)  ½ (R + r + 1) P(0, λ₀) + (R - r) P(r+1, λ₁)/(1 - ρ₁) = 1,

or on using (1.2.26) we obtain

(1.2.43)  P(0, λ₀) = 1 / [½ (R + r + 1) + 1/(1 - ρ₁)].

It has been verified that (1.2.43) can also be obtained as a limiting case of (1.2.41).
Summarizing all the results obtained so far, we have for the distribution of the number of units in the system, subject to the restriction that ρ₁ < 1:

(1.2.44)
    P(n, λ₀) = ρ₀ⁿ P(0, λ₀)                                   (n = 0, 1, ..., r),     ρ₀ ≠ 1
             = P(0, λ₀)                                       (n = 0, 1, ..., r),     ρ₀ = 1

    P(n, λ₀) = [ρ₀^r (ρ₀ⁿ - ρ₀^R)/(ρ₀^r - ρ₀^R)] P(0, λ₀)    (n = r, r+1, ..., R-1),  ρ₀ ≠ 1
             = [(R - n)/(R - r)] P(0, λ₀)                     (n = r, r+1, ..., R-1),  ρ₀ = 1

    P(n, λ₁) = P(r+1, λ₁) (1 - ρ₁^(n-r))/(1 - ρ₁)            (n = r+1, ..., R)
             = P(R, λ₁) ρ₁^(n-R)                              (n ≥ R),

where

    P(r+1, λ₁) = [ρ₀^(R+r) (1 - ρ₀)/(ρ₀^r - ρ₀^R)] P(0, λ₀)    (ρ₀ ≠ 1)
               = P(0, λ₀)/(R - r)                               (ρ₀ = 1),

and P(0, λ₀) is given by (1.2.41) for ρ₀ ≠ 1 and by (1.2.43) for ρ₀ = 1.
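As a numerical check on the stationary distribution and on the expression for P(0, λ₀), one can build the transition-rate matrix of the process directly, truncate the unbounded λ₁ branch at a large level, and solve for the stationary vector (an illustrative Python/NumPy sketch, not from the thesis; all parameter values are hypothetical, with ρ₀ ≠ 1 and ρ₁ < 1):

```python
import numpy as np

mu, lam0, lam1, R, r = 1.0, 0.9, 0.6, 5, 2
rho0, rho1 = lam0 / mu, lam1 / mu

# States (n, 0) for n = 0..R-1 and (n, 1) for n = r+1..N_max
# (the lam1 branch is truncated at N_max; the tail mass is negligible).
N_max = 200
states = [(n, 0) for n in range(R)] + [(n, 1) for n in range(r + 1, N_max + 1)]
idx = {s: k for k, s in enumerate(states)}

Q = np.zeros((len(states), len(states)))
for (n, i), k in idx.items():
    lam = lam0 if i == 0 else lam1
    if i == 0:                       # arrival; rate switches lam0 -> lam1 at R
        Q[k, idx[(n + 1, 0) if n + 1 < R else (R, 1)]] += lam
    elif n < N_max:                  # arrival in the lam1 phase
        Q[k, idx[(n + 1, 1)]] += lam
    if i == 0 and n > 0:             # departure in the lam0 phase
        Q[k, idx[(n - 1, 0)]] += mu
    elif i == 1:                     # departure; switches back lam1 -> lam0 at r
        Q[k, idx[(n - 1, 1) if n - 1 > r else (r, 0)]] += mu
    Q[k, k] = -Q[k].sum()

# Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Closed forms for P(0, lam0) and P(r+1, lam1), rho0 != 1:
P0 = 1.0 / (1/(1 - rho0) - (R - r) * (rho0 - rho1) / (1 - rho1)
            * rho0**(R + r) / (rho0**r - rho0**R))
Pr1 = rho0**(R + r) * (1 - rho0) * P0 / (rho0**r - rho0**R)
print(P0, pi[idx[(0, 0)]])
```

The numerically computed stationary probabilities should agree with the closed forms to within the truncation error, which here is far below machine precision.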
The probability of finding the system with λ_i as the rate of arrival is given by

    P(λ_i) = Σ_n P(n, λ_i)    (i = 0, 1).

From (1.2.33) and (1.2.36) we have

(1.2.45)  P(λ₀) = [1/(1 - ρ₀) - (R - r) ρ₀^(R+r)/(ρ₀^r - ρ₀^R)] P(0, λ₀)    (ρ₀ ≠ 1)
                = ½ (R + r + 1) P(0, λ₀)                                     (ρ₀ = 1),

and (1.2.39) gives

(1.2.46)  P(λ₁) = (R - r) P(r+1, λ₁)/(1 - ρ₁),

where P(0, λ₀) and P(r+1, λ₁) are given by (1.2.44).

By means of straightforward algebra, it can be shown that the probability that the number of units in the system (ℓ_s) lies between r and R is given by

(1.2.47)  P[r ≤ ℓ_s ≤ R] = [ρ₀^r/(1 - ρ₀) - (R - r) ρ₀^(R+r)/(ρ₀^r - ρ₀^R)
              + ρ₀^(R+r) (1 - ρ₀) ((R - r) - Σ_{n=1}^{R-r} ρ₁ⁿ)/((ρ₀^r - ρ₀^R)(1 - ρ₁))] P(0, λ₀)    (ρ₀ ≠ 1)

          = [½ (R - r + 1) + ((R - r) - Σ_{n=1}^{R-r} ρ₁ⁿ)/((R - r)(1 - ρ₁))] P(0, λ₀)    (ρ₀ = 1).
Similarly, the expressions for the conditional probabilities that the arrival rates are λ₀ and λ₁, given that r ≤ ℓ_s ≤ R, follow at once from the two terms of (1.2.47); for example, in the case ρ₀ = 1,

(1.2.48)  P(λ₀ | r ≤ ℓ_s ≤ R) = ½ (R - r + 1) / [½ (R - r + 1) + ((R - r) - Σ_{n=1}^{R-r} ρ₁ⁿ)/((R - r)(1 - ρ₁))]

and

          P(λ₁ | r ≤ ℓ_s ≤ R) = [((R - r) - Σ_{n=1}^{R-r} ρ₁ⁿ)/((R - r)(1 - ρ₁))] / [½ (R - r + 1) + ((R - r) - Σ_{n=1}^{R-r} ρ₁ⁿ)/((R - r)(1 - ρ₁))],

with the corresponding expressions for ρ₀ ≠ 1 obtained in the same way.
1.3 The expected number of units in the system (L_s) and the corresponding waiting time (W_s): The contribution to L_s from the states (n, λ₀) (n = 0, 1, ..., R-1) is

    L_s0 = Σ_{n=0}^{R-1} n P(n, λ₀) = Σ_{n=0}^{r} n P(n, λ₀) + Σ_{n=r+1}^{R-1} n P(n, λ₀).

Case (i): ρ₀ ≠ 1. Now Σ_{n=0}^{r} n P(n, λ₀) = Σ_{n=0}^{r} n ρ₀ⁿ P(0, λ₀), and after simplification we obtain

(1.3.1)  Σ_{n=0}^{r} n P(n, λ₀) = P(0, λ₀) ρ₀ [1 - (r + 1) ρ₀^r + r ρ₀^(r+1)]/(1 - ρ₀)².

Similarly, it can be shown that

(1.3.2)  Σ_{n=r+1}^{R-1} n P(n, λ₀) = [ρ₀^r P(0, λ₀)/(ρ₀^r - ρ₀^R)] {ρ₀ [(r + 1) ρ₀^r - r ρ₀^(r+1) - R ρ₀^(R-1) + (R - 1) ρ₀^R]/(1 - ρ₀)² - ½ (R + r)(R - r - 1) ρ₀^R},

and we denote the sum of the right-hand sides of (1.3.1) and (1.3.2) by

(1.3.3)  L_s0 = Σ_{n=0}^{R-1} n P(n, λ₀)    (ρ₀ ≠ 1).
Case (ii): ρ₀ = 1. Here

(1.3.4)  Σ_{n=0}^{r} n P(n, λ₀) = ½ r (r + 1) P(0, λ₀),

and it can be shown that

(1.3.5)  Σ_{n=r+1}^{R-1} n P(n, λ₀) = Σ_{n=r+1}^{R-1} n [(R - n)/(R - r)] P(0, λ₀)
       = [P(0, λ₀)/(R - r)] {½ R [(R - 1)R - r(r + 1)] - ⅙ [(R - 1)R(2R - 1) - r(r + 1)(2r + 1)]}.

Combining (1.3.4) and (1.3.5) and simplifying we obtain

(1.3.6)  L_s0 = ⅙ (R² + Rr + r² - 1) P(0, λ₀) = ⅙ (R - r)(R² + Rr + r² - 1) P(r+1, λ₁),

on using (1.2.26).

The contribution to L_s from the states (n, λ₁) is given by

    L_s1 = Σ_{n=r+1}^∞ n P(n, λ₁) = Σ_{n=r+1}^{R-1} n P(n, λ₁) + Σ_{n=R}^∞ n P(n, λ₁).

It can be shown that

(1.3.7)  Σ_{n=r+1}^{R-1} n P(n, λ₁) = [P(r+1, λ₁)/(1 - ρ₁)] [½ (R + r)(R - r - 1) - Σ_{n=r+1}^{R-1} n ρ₁^(n-r)]
and

(1.3.8)  Σ_{n=R}^∞ n P(n, λ₁) = [P(r+1, λ₁) (1 - ρ₁^(R-r))/(1 - ρ₁)] [R/(1 - ρ₁) + ρ₁/(1 - ρ₁)²],

and addition of (1.3.7) and (1.3.8) leads, after simplification, to

(1.3.9)  L_s1 = [P(r+1, λ₁)/(1 - ρ₁)] [½ (R + r)(R - r - 1) + (R - r ρ₁)/(1 - ρ₁)].

Adding (1.3.3) and (1.3.9) we obtain

(1.3.10)  L_s = L_s0 + L_s1    (ρ₀ ≠ 1),

where P(0, λ₀) and P(r+1, λ₁) are, respectively, given by (1.2.41) and (1.2.21). Similarly, adding (1.3.6) and (1.3.9) we obtain

(1.3.11)  L_s = ⅙ (R² + Rr + r² - 1) P(0, λ₀) + [P(r+1, λ₁)/(1 - ρ₁)] [½ (R + r)(R - r - 1) + (R - r ρ₁)/(1 - ρ₁)]    (ρ₀ = 1),

where P(r+1, λ₁) = P(0, λ₀)/(R - r) and P(0, λ₀) is given by (1.2.43).
The expected waiting time in the system, by making use of the well-known formula of Little [4], is given by

(1.3.12)  W_s = L_s / λ̄,

where λ̄ (= λ₀ P(λ₀) + λ₁ P(λ₁)) is the actual expected arrival rate under the operation of the management doctrine, and P(λ₀) and P(λ₁) are, respectively, given by (1.2.45) and (1.2.46).
1.4 The generating function of the probabilities of the number of units in the system: Define the partial generating functions

(1.4.1)  h(z, λ₀) = Σ_{n=0}^{R-1} zⁿ P(n, λ₀),    h(z, λ₁) = Σ_{n=r+1}^∞ zⁿ P(n, λ₁).

Denote the probability of finding n units in the system by P(n); then clearly the generating function of the P(n)'s is

(1.4.2)  h(z) = h(z, λ₀) + h(z, λ₁).

With h(z, λ₀) given by (1.4.1), equations (1.2.10) through (1.2.14) lead to

    (μ/z)[h(z, λ₀) - P(0, λ₀)] - (μ + λ₀) h(z, λ₀) + μ P(0, λ₀) + λ₀ z h(z, λ₀) - λ₀ z^R P(R-1, λ₀) + μ z^r P(r+1, λ₁) = 0,

which on simplification gives

(1.4.3)  h(z, λ₀) = [μ(1 - z) P(0, λ₀) + λ₀ z^(R+1) P(R-1, λ₀) - μ z^(r+1) P(r+1, λ₁)] / [λ₀ z² - (μ + λ₀) z + μ].

Similarly, with h(z, λ₁) given by (1.4.1), considering equations (1.2.15) through (1.2.18) we obtain

(1.4.4)  h(z, λ₁) = [μ z^(r+1) P(r+1, λ₁) - λ₀ z^(R+1) P(R-1, λ₀)] / [λ₁ z² - (μ + λ₁) z + μ].
Case (i): ρ₀ ≠ 1. Clearly, h(z, λ₀) converges for all values of z. The denominator on the right hand side of (1.4.3) has two zeros occurring at z = 1 and z = μ/λ₀ = 1/ρ₀, and therefore the numerator must vanish for these two values of z. That is, we have the following two equations:

    λ₀ P(R-1, λ₀) - μ P(r+1, λ₁) = 0,

    μ(1 - 1/ρ₀) P(0, λ₀) + λ₀ (1/ρ₀)^(R+1) P(R-1, λ₀) - μ (1/ρ₀)^(r+1) P(r+1, λ₁) = 0.

Solving these equations we have

(1.4.5)  P(r+1, λ₁) = ρ₀ P(R-1, λ₀) = [ρ₀^(R+r) (1 - ρ₀)/(ρ₀^r - ρ₀^R)] P(0, λ₀).

As expected, (1.4.5) is the same as (1.2.21).

Notice that h(z, λ₁) converges for |z| ≤ 1, and that the denominator on the right hand side of (1.4.4) has two zeros occurring at z = 1 and z = 1/ρ₁. As has been remarked before, for the existence of equilibrium conditions we require ρ₁ < 1, that is, 1/ρ₁ > 1; otherwise the server will be unable to deal with the customers as fast as they arrive. We, therefore, disregard the zero z = 1/ρ₁. As before, the numerator on the right hand side of (1.4.4) must vanish at z = 1, and this leads to the relation

    P(r+1, λ₁) = ρ₀ P(R-1, λ₀),

which is the same as (1.4.5), and on using this relation we can write

(1.4.6)  h(z, λ₀) = [μ(1 - z) P(0, λ₀) + λ₀ P(R-1, λ₀)(z^(R+1) - z^(r+1))] / [λ₀ z² - (μ + λ₀) z + μ],

(1.4.7)  h(z, λ₁) = λ₀ P(R-1, λ₀)(z^(r+1) - z^(R+1)) / [λ₁ z² - (μ + λ₁) z + μ],

where

    P(R-1, λ₀) = [ρ₀^(R+r-1) (1 - ρ₀)/(ρ₀^r - ρ₀^R)] P(0, λ₀).

On using the normalizing condition

    1 = lim_{z→1} h(z) = lim_{z→1} [h(z, λ₀) + h(z, λ₁)],

it has been verified that P(0, λ₀) is given by (1.2.41). Thus we have determined the generating function h(z) completely when ρ₀ ≠ 1.

Case (ii): ρ₀ = 1. When ρ₀ = 1, (1.4.3) gives

    h(z, λ₀) = [(1 - z) P(0, λ₀) + z^(R+1) P(R-1, λ₀) - z^(r+1) P(r+1, λ₁)] / (1 - z)².
Using similar arguments as before we obtain

(1.4.8)  P(r+1, λ₁) = P(R-1, λ₀) = P(0, λ₀)/(R - r)

and

(1.4.9)  h(z, λ₀) = [P(0, λ₀)/(1 - z)] [1 - (1/(R - r)) Σ_{n=r+1}^{R} zⁿ].

The expression for h(z, λ₁) remains the same as before, that is, it is given by (1.4.7), except that in this case P(R-1, λ₀) is given by (1.4.8). Again on using the normalizing condition we arrive at

    P(0, λ₀) = 1 / [½ (R + r + 1) + 1/(1 - ρ₁)],

which is the same as (1.2.43).

Having determined h(z), we can find the moments of the distribution of the number of units in the system. As a check for the expected number of units in the system we calculated h′(z)|_{z=1} and got the same expressions as those given in Section 1.3.
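The closed form (1.4.7) can also be checked against its defining power series at a sample point (an illustrative Python sketch; the parameter values and the normalization P(R-1, λ₀) = 1 are hypothetical choices for the check):

```python
mu, lam0, lam1, r, R = 1.0, 0.9, 0.6, 2, 5
rho0, rho1 = lam0 / mu, lam1 / mu
PRm1 = 1.0                 # P(R-1, lam0), arbitrary normalization
Pr1 = rho0 * PRm1          # P(r+1, lam1) = rho0 * P(R-1, lam0)

def P1(n):                 # P(n, lam1) for n = r+1, r+2, ...
    if n <= R:
        return Pr1 * (1 - rho1**(n - r)) / (1 - rho1)
    return Pr1 * (1 - rho1**(R - r)) / (1 - rho1) * rho1**(n - R)

z = 0.5
direct = sum(z**n * P1(n) for n in range(r + 1, 400))   # truncated series
closed = lam0 * PRm1 * (z**(r + 1) - z**(R + 1)) / (lam1*z**2 - (mu + lam1)*z + mu)
print(direct, closed)      # both approximately 0.28125
```

The agreement holds for any |z| ≤ 1 away from the disregarded zero, which is a convenient way to catch sign or index slips in the transform algebra.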
1.5 The Laplace transform of the distribution of the busy period and its expected length: To obtain this we follow Bailey's method for finding the distribution of the busy period in the case of a simple queue [1]. That is, we consider the modified process which ceases as soon as the number of units in the system falls to zero. In other words, the state (0, λ₀) may be considered as an absorbing state, with the condition that at the beginning of the period the system is in the state (1, λ₀).

With this understanding, we denote by Q(t, n, λ_i) the probability of the state (n, λ_i) at time t. Note that p(t, n, λ_i) introduced in Section 1.2 has the same meaning as Q(t, n, λ_i) but for a different process. The differential-difference equations which the Q(t, n, λ_i)'s satisfy are:

(1.5.1)  Q'(t, 0, λ₀) = μ Q(t, 1, λ₀)
         Q'(t, 1, λ₀) = -(μ + λ₀) Q(t, 1, λ₀) + μ Q(t, 2, λ₀)
         Q'(t, n, λ₀) = -(μ + λ₀) Q(t, n, λ₀) + μ Q(t, n+1, λ₀) + λ₀ Q(t, n-1, λ₀)    (n = 2, 3, ..., r-1)

(1.5.2)  Q'(t, r, λ₀) = -(μ + λ₀) Q(t, r, λ₀) + μ Q(t, r+1, λ₀) + λ₀ Q(t, r-1, λ₀) + μ Q(t, r+1, λ₁)
         Q'(t, n, λ₀) = -(μ + λ₀) Q(t, n, λ₀) + μ Q(t, n+1, λ₀) + λ₀ Q(t, n-1, λ₀)    (n = r+1, ..., R-2)
         Q'(t, R-1, λ₀) = -(μ + λ₀) Q(t, R-1, λ₀) + λ₀ Q(t, R-2, λ₀)

(1.5.3)  Q'(t, r+1, λ₁) = -(μ + λ₁) Q(t, r+1, λ₁) + μ Q(t, r+2, λ₁)
         Q'(t, n, λ₁) = -(μ + λ₁) Q(t, n, λ₁) + μ Q(t, n+1, λ₁) + λ₁ Q(t, n-1, λ₁)    (n = r+2, ..., R-1)
         Q'(t, R, λ₁) = -(μ + λ₁) Q(t, R, λ₁) + μ Q(t, R+1, λ₁) + λ₁ Q(t, R-1, λ₁) + λ₀ Q(t, R-1, λ₀)
         Q'(t, n, λ₁) = -(μ + λ₁) Q(t, n, λ₁) + μ Q(t, n+1, λ₁) + λ₁ Q(t, n-1, λ₁)    (n ≥ R+1)
Define the partial generating functions

(1.5.4)  q(t, z, λ₀) = Σ_{n=1}^{R-1} zⁿ Q(t, n, λ₀),    q(t, z, λ₁) = Σ_{n=r+1}^∞ zⁿ Q(t, n, λ₁),

and because of the initial condition Q(0, n, λ_i) = 1 for n = 1, i = 0, and = 0 otherwise, we have

(1.5.5)  q(0, z, λ₀) = z,    q(0, z, λ₁) = 0.

Making use of (1.5.4), the set of equations (1.5.1) and (1.5.2) leads to

(1.5.6)  z ∂q(t, z, λ₀)/∂t = [λ₀ z² - (μ + λ₀) z + μ] q(t, z, λ₀) - μ z Q(t, 1, λ₀)
             - λ₀ z^(R+1) Q(t, R-1, λ₀) + μ z^(r+1) Q(t, r+1, λ₁).

Let us denote the Laplace transform of any function φ(t) by φ*(s), where

(1.5.7)  φ*(s) = ∫₀^∞ e^(-st) φ(t) dt.

Taking the Laplace transform with respect to time of both sides of (1.5.6) and using (1.5.5) we get

    z [s q*(s, z, λ₀) - z] = [λ₀ z² - (μ + λ₀) z + μ] q*(s, z, λ₀) - μ z Q*(s, 1, λ₀)
             - λ₀ z^(R+1) Q*(s, R-1, λ₀) + μ z^(r+1) Q*(s, r+1, λ₁),

that is,

(1.5.8)  q*(s, z, λ₀) = [μ z Q*(s, 1, λ₀) + λ₀ z^(R+1) Q*(s, R-1, λ₀) - μ z^(r+1) Q*(s, r+1, λ₁) - z²] / [λ₀ z² - (s + μ + λ₀) z + μ].
Since q*(s, z, λ₀) converges for all values of z provided Re(s) > 0, the zeros of the denominator in (1.5.8) must coincide with those of the numerator. Let us denote the zeros of the quadratic expression λ_i z² - (s + μ + λ_i) z + μ by α_i(s) and β_i(s) (i = 0, 1), where

(1.5.9)  α_i(s) = (1/2λ_i) {(s + μ + λ_i) + [(s + μ + λ_i)² - 4μλ_i]^(1/2)},
         β_i(s) = (1/2λ_i) {(s + μ + λ_i) - [(s + μ + λ_i)² - 4μλ_i]^(1/2)},

so that

    α_i + β_i = (s + μ + λ_i)/λ_i,    α_i β_i = μ/λ_i,

and in (1.5.9) we choose that value of the square root for which the real part is positive. Setting the numerator of (1.5.8) equal to zero and substituting the two zeros α₀ and β₀ (at each of which the numerator must vanish), we have

(1.5.10)  λ₀ α₀^R Q*(s, R-1, λ₀) - μ α₀^r Q*(s, r+1, λ₁) = α₀ - μ Q*(s, 1, λ₀),
          λ₀ β₀^R Q*(s, R-1, λ₀) - μ β₀^r Q*(s, r+1, λ₁) = β₀ - μ Q*(s, 1, λ₀).
Similarly considering equations (1.5.3) and on making use of
(1.5.4), (1.5.7) and (1.5.5) we obtain
~z
(1. 5.11)
r+1Q* ( s,r+1)"\ ) - AoZR+1Q* ( s}R-1}
Since
q*(s,z, A ) converges inside and on the unit circle for
1
Re(s) > O} the zeros of the denominator of (1.5.11) inside and on tz 1=1
must coincide with the corresponding zeros of the numerator.
Using
Rouche's theorem} it can be shown that (see Saaty [6], page 89) the
denominator of (1.5.11) has only one zero inside the unit' circle and
this zero is
I z I = 1.
~1
as given by (1.5.9), and there is no
zero on
Setting the numerator of (1.5.11) equal to zero at
z
= ~1
we obtain
(1. 5.12)
Solving for
= Ao ~1R-r Q* ( s,R-1, A0 )
Q*(s,l} A) from equations (1.5.10) and (1.5.12) we
o
obtain after lengthy but simple algebra and noting that O:i ~ i =III Ai
25
( c:l- l
=
o
f3 R-l)
0
----=---=---
A
r-l
+
_
(to )
(c:l -
f3R )
000
(c/-r_ f3 R-r)
o
0
f3R-r ( ex
- f3 )
100
Now the p.d.f. of the length of the busy period fb(t) is clearly
given by
(1. 5.14)
Q'(t, 0, /I. )
o
=
IlQ(t,l, A)
o
•
TaJdng the Laplace transform of both sides of (1.5.14) we have
J.
Recall that ex.~
and
(3.~ (i
= 0,1)
are functions of
s
and are
given by (1.5.9).
Thus we have determined the Laplace transform of the distribution
of the busy period.
On differentiating  f_b*(s)  and setting  s  equal to zero we obtain the
expected length of the busy period.
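As a numerical illustration (not part of the thesis): in the simple-queue special case λ_0 = λ_1 = λ noted in the Remarks of this chapter, the expected busy period is 1/(μ − λ), which a short Monte Carlo sketch can confirm (names are ad hoc):

```python
import random

def busy_period(lam, mu, rng):
    """One busy period of a simple M/M/1 queue (lam < mu): the time from
    the arrival that finds the system empty until the system next empties."""
    t, n = 0.0, 1
    while n > 0:
        t += rng.expovariate(lam + mu)               # time to next event
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return t

rng = random.Random(7)
lam, mu = 1.0, 2.0
trials = 100000
mean = sum(busy_period(lam, mu, rng) for _ in range(trials)) / trials
assert abs(mean - 1.0 / (mu - lam)) < 0.05   # E(busy period) = 1/(mu - lam) = 1
```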
1.6 The Laplace transform of the distribution of the time the system
spends in the subset of states  {(n, λ_i)}  (i = 0,1)  before it goes
out of the subset for the first time, and its expected value:  The
distribution of the time the system spends in the subset of states
{(n, λ_0)}  (n = 0,1,...,R−1)  before it goes out of the subset for the
first time is clearly the same as the distribution of the time, in a
simple queue, that elapses before the number of units in the system
grows from  0  to  R.  The latter is known (see Saaty [6], Exercise
no. 17, page 129) and we shall simply write down the results: let
f_0(t)  denote the p.d.f. of the required distribution and let  f_0*(s)
denote its Laplace transform; then

(1.6.1)
    f_0*(s) = (α_0 − β_0) / [α_0^{R+1}(1 − β_0) − β_0^{R+1}(1 − α_0)] ,

where  α_0  and  β_0  are given by (1.5.9).
The expected length of the period is given by

(1.6.2)
    E(t_0) = R/(λ_0 − μ) + [μ/(λ_0 − μ)^2][(μ/λ_0)^R − 1] ,     λ_0 ≠ μ ,
    E(t_0) = R(R+1)/(2μ) ,                                      λ_0 = μ .
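The closed form (1.6.2) can be cross-checked against the elementary step recursion for mean first-passage times (a sketch in modern notation, not part of the thesis): writing m_n for the mean time for the queue to grow from n to n+1, we have m_0 = 1/λ_0, m_n = 1/λ_0 + (μ/λ_0)m_{n−1}, and E(t_0) = m_0 + m_1 + ... + m_{R−1}:

```python
def expected_passage_0_to_R(lam, mu, R):
    """E(time for the queue to grow from 0 to R) via the step recursion
    m_0 = 1/lam, m_n = 1/lam + (mu/lam)*m_{n-1}."""
    total, m = 0.0, 0.0
    for _ in range(R):
        m = 1.0 / lam + (mu / lam) * m   # first pass gives m_0 = 1/lam
        total += m
    return total

lam, mu, R = 2.0, 1.0, 5
closed_form = R / (lam - mu) + mu / (lam - mu) ** 2 * ((mu / lam) ** R - 1)
assert abs(expected_passage_0_to_R(lam, mu, R) - closed_form) < 1e-9
```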
Similarly, the distribution  f_1(t)  of the time the system spends in
the subset of states  {(n, λ_1)}  (n = r+1, r+2, ...)  before it goes
out of the subset for the first time is the same as the distribution of
the time, in a simple queue, that elapses before the number of units in
the system reaches  r  (r < R), where  R  is the number of units at time
t = 0.  The Laplace transform of  f_1(t)  is given by

(1.6.3)
    f_1*(s) = β_1^{R−r} ,

where again  β_1  is given by (1.5.9), and the expected value is

(1.6.4)
    E(t_1) = (R − r)/(μ − λ_1) .

Note:  Saaty has also found the inverse transform of  f_1*(s).
1.7 The generating function of the Laplace transform with respect to
time of the time-dependent probabilities of the number of units in the
system:  Let us denote by  P(t,n)  the probability that at time  t
there are  n  units in the system.  Then clearly
P(t,n) = P(t,n,λ_0) + P(t,n,λ_1).  Define the generating functions

(1.7.1)
    h(t,z,λ_0) = Σ_{n=0}^{R−1} z^n P(t,n,λ_0) ,

(1.7.2)
    h(t,z,λ_1) = Σ_{n=r+1}^{∞} z^n P(t,n,λ_1) ,

and  h(t,z) = h(t,z,λ_0) + h(t,z,λ_1).
We assume that at time  t = 0  there are no units in the system, and
this leads to the initial condition

(1.7.3)
    h(0,z,λ_i) = 1  for  i = 0 ,     h(0,z,λ_i) = 0  for  i = 1 .
By considering equations (1.2.1) through (1.2.5) and making use of
(1.7.2) we obtain

(1.7.4)
    z ∂h(t,z,λ_0)/∂t = [λ_0 z^2 − (μ+λ_0)z + μ] h(t,z,λ_0) + μ(z−1)P(t,0,λ_0)
                        − λ_0 z^{R+1} P(t,R−1,λ_0) + μ z^{r+1} P(t,r+1,λ_1) .

Taking the Laplace transform with respect to  t  of both sides of
(1.7.4) and making use of (1.7.3) we obtain, after a little
simplification,

(1.7.5)
    h*(s,z,λ_0) = [μ(1−z)P*(s,0,λ_0) + λ_0 z^{R+1} P*(s,R−1,λ_0) − μ z^{r+1} P*(s,r+1,λ_1) − z] / [λ_0 z^2 − (s+μ+λ_0)z + μ] .
Making the same kind of argument as in section (1.5) we arrive at the
following equations

(1.7.6)
    μ(1−α_0)P*(s,0,λ_0) + λ_0 α_0^{R+1} P*(s,R−1,λ_0) − μ α_0^{r+1} P*(s,r+1,λ_1) = α_0 ,
    μ(1−β_0)P*(s,0,λ_0) + λ_0 β_0^{R+1} P*(s,R−1,λ_0) − μ β_0^{r+1} P*(s,r+1,λ_1) = β_0 ,

where  α_0  and  β_0  are given by (1.5.9).
Similarly considering equations (1.2.6) through (1.2.9) we obtain as
before

(1.7.7)
    h*(s,z,λ_1) = [μ z^{r+1} P*(s,r+1,λ_1) − λ_0 z^{R+1} P*(s,R−1,λ_0)] / [λ_1 z^2 − (s+μ+λ_1)z + μ] ,

where

(1.7.8)
    μ P*(s,r+1,λ_1) = λ_0 β_1^{R−r} P*(s,R−1,λ_0) ,

and again  β_1  is given by (1.5.9).

Now (1.7.6) and (1.7.8) are three linear equations in three unknowns,
and hence all the unknowns can be determined.  That is, both
h*(s,z,λ_0)  and  h*(s,z,λ_1)  can be determined.
The generating function of the Laplace transform with respect to time
of the  P(t,n)'s  is obviously given by

    h*(s,z) = h*(s,z,λ_0) + h*(s,z,λ_1) .

Note:  Because of the fact that

(1.7.10)
    lim_{s→0} s P*(s,n) = lim_{t→∞} P(t,n) ,

we can get (1.4.3) and (1.4.4) from (1.7.5) and (1.7.7) respectively by
means of the operation (1.7.10).  As a check we see that if we use the
same operation on (1.7.8) we obtain (1.4.5).
1.8 Remarks:  On putting  λ_0 = λ_1, all the results or formulas for a
simple queue, subject to the usual restriction  ρ_0 < 1  for the
existence of equilibrium conditions, can be obtained from this chapter.

It is possible to superimpose a cost structure on the problem, and on
defining a cost function, optimization may be carried out.  The
parameters  R, r  and  λ_1  are assumed to be under the control of
management, and appendix (a) summarizes an approach to the selection of
their optimal values.
CHAPTER II

A SINGLE-SERVER QUEUE WITH CONTROLLABLE ARRIVAL
RATE - THE GENERAL CASE

2.1 Introduction.  In this chapter the assumptions for the distribution
of the service time are the same as in Chapter I.  The assumption
regarding the arrival process is more general in the sense that we
allow the arrival rate  λ  to take  (N+1)  values, where  N ≥ 1  is an
integer.  Let the  (N+1)  values of  λ  be denoted by  λ_0, λ_1, ..., λ_N,
and let  (r_1, R_1), (r_2, R_2), ..., (r_N, R_N)  be  N  pairs of
integers satisfying  r_1 ≥ 0,  r_1 < r_2 < ... < r_N;
R_1 < R_2 < ... < R_N;  and  r_1 < R_1, r_2 < R_2, ..., r_N < R_N.
Suppose at some instant of time there are no units in the system.
Units then start coming in a Poisson stream with intensity  λ = λ_0.
This arrival rate  λ_0  remains unchanged as long as the number of
units in the system  (ξ_s)  is strictly less than  R_1.  When  ξ_s
assumes a value  R_1, the arrival rate changes instantaneously from
λ_0  to  λ_1.  After the change takes place, the arrival rate will
continue to be  λ_1  as long as  r_1+1 ≤ ξ_s ≤ R_2−1.  If  ξ_s  goes
down to  r_1, then  λ  changes back from  λ_1  to  λ_0, and in order
that  λ_0  change again to  λ_1,  ξ_s  has to grow to size  R_1, and
the same process is repeated.  With  λ  assuming a value  λ_1, should
ξ_s  grow to a size  R_2,
the arrival rate changes instantaneously from  λ_1  to  λ_2, and it
continues to be  λ_2  as long as  r_2+1 ≤ ξ_s ≤ R_3−1.  If  ξ_s  goes
down to  r_2, then  λ  changes back to  λ_1, whereas if  ξ_s  reaches
a value  R_3, the arrival rate changes instantaneously from  λ_2  to
λ_3.  When  λ  changes from  λ_2  to  λ_3, the arrival rate will
continue to be  λ_3  as long as  r_3+1 ≤ ξ_s ≤ R_4−1; it will change
back to  λ_2  if and only if  ξ_s  goes down to  r_3, and will assume
a value  λ_4  should  ξ_s  reach a size  R_4, and so on.  When  λ
assumes a value  λ_{N−1}  we have  r_{N−1}+1 ≤ ξ_s ≤ R_N−1.  If  ξ_s
goes down to a value  r_{N−1},  λ  changes from  λ_{N−1}  to  λ_{N−2},
whereas if  ξ_s  grows to size  R_N,  λ  will change instantaneously
to  λ_N, and at this last stage it will continue to be  λ_N  unless
ξ_s  goes down to  r_N, when  λ  changes back to  λ_{N−1};  that is,
λ = λ_N  for all  ξ_s ≥ r_N + 1.
In this chapter we shall restrict ourselves strictly to equilibrium
conditions, and we need (as in Chapter I) the restriction  λ_N < μ  for
the existence of stationary probabilities.  When we draw the state
diagram of this process we see that there are  N  loops which may or
may not overlap, depending upon the values of the  r_i's  and  R_i's
(i = 1,2,...,N).  In every case we have  (N+1)  sets of equations
corresponding to the  (N+1)  values of  λ.  We shall again denote by
(n, λ_i)  the state that  ξ_s = n  and  λ = λ_i
(n = 0,1,2,...;  i = 0,1,2,...,N).
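The switching rule just described can be summarized as a small state-update function; the sketch below (not part of the thesis; all names are ad hoc) encodes "λ moves up one index when ξ_s reaches R_{i+1}, and down one index when ξ_s falls to r_i", and exercises it on a two-loop example:

```python
def next_rate_index(i, n, r, R):
    """Index of the arrival rate after the queue length becomes n, given
    current index i; r[1..N] and R[1..N] are the switch-down/switch-up
    levels (slot 0 unused).  Up at n == R[i+1], down at n == r[i]."""
    N = len(R) - 1
    if i < N and n == R[i + 1]:
        return i + 1
    if i > 0 and n == r[i]:
        return i - 1
    return i

# Two loops (N = 2): r = (_, 1, 3), R = (_, 4, 6)
r, R = (None, 1, 3), (None, 4, 6)
i = 0
for n in [1, 2, 3, 4]:       # queue grows to R_1 = 4  ->  lambda_1
    i = next_rate_index(i, n, r, R)
assert i == 1
for n in [5, 6]:             # grows on to R_2 = 6     ->  lambda_2
    i = next_rate_index(i, n, r, R)
assert i == 2
for n in [5, 4, 3]:          # falls to r_2 = 3        ->  back to lambda_1
    i = next_rate_index(i, n, r, R)
assert i == 1
```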
2.2 The distribution of  ξ_s, the number of units in the system:

(I)  The case of N non-overlapping loops:  This is the case where
0 ≤ r_1 < R_1 < r_2 < R_2 < ... < R_{N−1} < r_N < R_N.  The diagram of
the process for this case is as follows:

Diagram 2.1

The stationary probabilities  P(n, λ_i)  satisfy the following  (N+1)
sets of difference equations.  We shall refer to the set corresponding
to  λ_i  as the i-th set and denote it by  (i)  (i = 0,1,...,N).
Set (0)

(2.2.1)   μP(1, λ_0) − λ_0 P(0, λ_0) = 0

(2.2.2)   μP(n+2, λ_0) − (μ+λ_0)P(n+1, λ_0) + λ_0 P(n, λ_0) = 0     (n = 0,1,...,r_1−2)

(2.2.3)   μP(r_1+1, λ_1) + μP(r_1+1, λ_0) − (μ+λ_0)P(r_1, λ_0) + λ_0 P(r_1−1, λ_0) = 0

(2.2.4)   μP(n+2, λ_0) − (μ+λ_0)P(n+1, λ_0) + λ_0 P(n, λ_0) = 0     (n = r_1, r_1+1, ..., R_1−3)

(2.2.5)   λ_0 P(R_1−2, λ_0) − (μ+λ_0)P(R_1−1, λ_0) = 0
Set (i)     (i = 1,2,...,N−1)

(2.2.6)   μP(r_i+2, λ_i) − (μ+λ_i)P(r_i+1, λ_i) = 0

(2.2.7)   μP(n+2, λ_i) − (μ+λ_i)P(n+1, λ_i) + λ_i P(n, λ_i) = 0     (n = r_i+1, r_i+2, ..., R_i−2)

(2.2.8)   μP(R_i+1, λ_i) − (μ+λ_i)P(R_i, λ_i) + λ_i P(R_i−1, λ_i) + λ_{i−1} P(R_i−1, λ_{i−1}) = 0

(2.2.9)   μP(n+2, λ_i) − (μ+λ_i)P(n+1, λ_i) + λ_i P(n, λ_i) = 0     (n = R_i, R_i+1, ..., r_{i+1}−2)

(2.2.10)  μP(r_{i+1}+1, λ_{i+1}) + μP(r_{i+1}+1, λ_i) − (μ+λ_i)P(r_{i+1}, λ_i) + λ_i P(r_{i+1}−1, λ_i) = 0

(2.2.11)  μP(n+2, λ_i) − (μ+λ_i)P(n+1, λ_i) + λ_i P(n, λ_i) = 0     (n = r_{i+1}, r_{i+1}+1, ..., R_{i+1}−3)

(2.2.12)  λ_i P(R_{i+1}−2, λ_i) − (μ+λ_i)P(R_{i+1}−1, λ_i) = 0

Set (N)

(2.2.13)  μP(r_N+2, λ_N) − (μ+λ_N)P(r_N+1, λ_N) = 0

(2.2.14)  μP(n+2, λ_N) − (μ+λ_N)P(n+1, λ_N) + λ_N P(n, λ_N) = 0     (n = r_N+1, r_N+2, ..., R_N−2)

(2.2.15)  μP(R_N+1, λ_N) − (μ+λ_N)P(R_N, λ_N) + λ_N P(R_N−1, λ_N) + λ_{N−1} P(R_N−1, λ_{N−1}) = 0

(2.2.16)  μP(n+2, λ_N) − (μ+λ_N)P(n+1, λ_N) + λ_N P(n, λ_N) = 0     (n ≥ R_N)
In solving these  (N+1)  sets of equations we shall deal only with the
case where  λ_i ≠ μ  (i = 0,1,2,...,N−1).  The results for some or all
λ_i = μ  can be obtained as limiting cases.
Notice that when we replace  r_1  and  R_1  by  r  and  R  respectively,
the equations of the set (0) are the same as equations (1.2.10) through
(1.2.14), and hence the solutions to the equations of the set (0) are
given by

(2.2.17)
    P(n, λ_0) = ρ_0^n P(0, λ_0)                                             (n = 0,1,...,r_1) ,
    P(n, λ_0) = [ρ_0^{r_1}/(ρ_0^{r_1} − ρ_0^{R_1})](ρ_0^n − ρ_0^{R_1}) P(0, λ_0)     (n = r_1, r_1+1, ..., R_1−1) ,

and

(2.2.18)
    P(r_1+1, λ_1) = ρ_0 P(R_1−1, λ_0) = [ρ_0^{R_1+r_1}(1 − ρ_0)/(ρ_0^{r_1} − ρ_0^{R_1})] P(0, λ_0) ,

where  ρ_i = λ_i/μ  (i = 0,1,2,...,N).
Next we shall solve the equations of the set (i) for  i = 1.
Equations (2.2.6), (2.2.7) and (2.2.8) are the same as (1.2.15),
(1.2.16) and (1.2.17) when we replace  r_1  and  R_1  by  r  and  R
respectively, and therefore we have

(2.2.19)
    P(n, λ_1) = [P(r_1+1, λ_1)/(1 − ρ_1)](1 − ρ_1^{n−r_1})     (n = r_1+1, r_1+2, ..., R_1)

and

(2.2.20)
    P(R_1+1, λ_1) = ρ_1 [P(r_1+1, λ_1)/(1 − ρ_1)](1 − ρ_1^{R_1−r_1}) .

The solution to (2.2.9), which is an ordinary homogeneous difference
equation of the second order with (2.2.20) as the initial condition, is

(2.2.21)
    P(n, λ_1) = [P(r_1+1, λ_1)/(1 − ρ_1)](1 − ρ_1^{R_1−r_1}) ρ_1^{n−R_1}     (n = R_1, R_1+1, ..., r_2) .

Similarly the solution to (2.2.11) with (2.2.12) as the initial
condition is given by

(2.2.22)
    P(n, λ_1) = [P(R_2−1, λ_1)/(1 − ρ_1)](ρ_1^{n−R_2+1} − ρ_1)     (n = r_2, r_2+1, ..., R_2−1) .

Notice that we can write equation (2.2.10) as

    μP(r_2+1, λ_2) = (μ+λ_1)P(r_2, λ_1) − λ_1 P(r_2−1, λ_1) − μP(r_2+1, λ_1) ,

and substituting the values of  P(r_2, λ_1), P(r_2−1, λ_1)  and
P(r_2+1, λ_1)  from (2.2.21) and (2.2.22) we arrive at the relation

(2.2.23)
    P(r_2+1, λ_2) = ρ_1 P(R_2−1, λ_1) ,

on using (2.2.22) for  n = R_2−1.  Also, on using (2.2.21) for
n = r_2, we can express  P(r_2+1, λ_2)  in terms of  P(r_1+1, λ_1).
The relation is given by

(2.2.24)
    P(r_2+1, λ_2) = [ρ_1^{R_2+r_2}(ρ_1^{r_1} − ρ_1^{R_1}) / (ρ_1^{R_1+r_1}(ρ_1^{r_2} − ρ_1^{R_2}))] P(r_1+1, λ_1) ,

and recall that  P(r_1+1, λ_1)  is given by (2.2.18).

On using (2.2.23) we obtain an expression alternative to (2.2.22) for
P(n, λ_1), and this is

(2.2.25)
    P(n, λ_1) = [P(r_2+1, λ_2)/(1 − ρ_1)](ρ_1^{n−R_2} − 1)     (n = r_2, r_2+1, ..., R_2−1) .

Let us now summarize the solutions to the equations of the set (1):

(2.2.26)
    P(n, λ_1) = [P(r_1+1, λ_1)/(1 − ρ_1)](1 − ρ_1^{n−r_1})               (n = r_1+1, r_1+2, ..., R_1) ,
    P(n, λ_1) = [P(r_1+1, λ_1)/(1 − ρ_1)](1 − ρ_1^{R_1−r_1}) ρ_1^{n−R_1}  (n = R_1, R_1+1, ..., r_2) ,
    P(n, λ_1) = [P(r_2+1, λ_2)/(1 − ρ_1)](ρ_1^{n−R_2} − 1)               (n = r_2, r_2+1, ..., R_2−1) ,

where  P(r_1+1, λ_1) = ρ_0 P(R_1−1, λ_0)  and  P(r_2+1, λ_2)  is given
by (2.2.23) and (2.2.24).  Recall that the first of these relations is
given by the set (0).
Notice that the sets of equations (i) (i = 1,2,...,N−1) are exactly the
same except for the differences in the subscripts of the parameters.
In solving the set (1) we made use of the information
P(r_1+1, λ_1) = ρ_0 P(R_1−1, λ_0)  given by the set (0).  Similarly the
set (1) gives the information  P(r_2+1, λ_2) = ρ_1 P(R_2−1, λ_1)  for
solving the set (2).  In view of the remark made at the beginning of
this paragraph, clearly the solution to the set (2) can be obtained
from the solution to the set (1) by merely replacing
λ_1, λ_2, ρ_1, r_1, R_1, r_2  and  R_2  respectively by
λ_2, λ_3, ρ_2, r_2, R_2, r_3  and  R_3.  The information supplied by
the set (2) for solving the set (3) is
P(r_3+1, λ_3) = ρ_2 P(R_3−1, λ_2).  In this way we solve the equations
of the sets (1), (2), ..., (N−1) successively.  Furthermore, we notice
that the equations of the set (N) are the same as equations (1.2.15)
through (1.2.18) when we replace  λ_0, λ_1, r  and  R  respectively by
λ_{N−1}, λ_N, r_N  and  R_N, and hence the solution to the equations of
the set (N) can be obtained from the solution to equations (1.2.15)
through (1.2.18).

We now write in condensed notation the solutions of the equations of
the sets (0), (1), ..., (N) under the restrictions  ρ_i ≠ 1
(i = 0,1,...,N−1);  as has been remarked before, for the existence of
the stationary probabilities we need  ρ_N < 1.
For  i = 1,2,...,N−1  we have

(2.2.28)
    P(n, λ_i) = [P(r_i+1, λ_i)/(1 − ρ_i)](1 − ρ_i^{n−r_i})               (n = r_i+1, r_i+2, ..., R_i) ,
    P(n, λ_i) = [P(r_i+1, λ_i)/(1 − ρ_i)](1 − ρ_i^{R_i−r_i}) ρ_i^{n−R_i}  (n = R_i, R_i+1, ..., r_{i+1}) ,
    P(n, λ_i) = [P(r_{i+1}+1, λ_{i+1})/(1 − ρ_i)](ρ_i^{n−R_{i+1}} − 1)    (n = r_{i+1}, r_{i+1}+1, ..., R_{i+1}−1) .

Finally, we have

(2.2.29)
    P(n, λ_N) = [P(r_N+1, λ_N)/(1 − ρ_N)](1 − ρ_N^{n−r_N})               (n = r_N+1, r_N+2, ..., R_N) ,
    P(n, λ_N) = [P(r_N+1, λ_N)/(1 − ρ_N)](1 − ρ_N^{R_N−r_N}) ρ_N^{n−R_N}  (n ≥ R_N) ,

where

(2.2.30)
    P(r_1+1, λ_1) = ρ_0 P(R_1−1, λ_0) = [ρ_0^{R_1+r_1}(1 − ρ_0)/(ρ_0^{r_1} − ρ_0^{R_1})] P(0, λ_0) ,

    P(r_{i+1}+1, λ_{i+1}) = ρ_i P(R_{i+1}−1, λ_i)
        = [ρ_i^{R_{i+1}+r_{i+1}}(ρ_i^{r_i} − ρ_i^{R_i}) / (ρ_i^{R_i+r_i}(ρ_i^{r_{i+1}} − ρ_i^{R_{i+1}}))] P(r_i+1, λ_i)     (i = 1,2,...,N−1) .

The only unknown involved now is  P(0, λ_0), which will be determined
later from the normalizing equation.  We postpone this for the time
being.
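As a numerical sanity check (not part of the thesis), the closed-form solution can be verified against the global balance equations in the single-loop case N = 1 (the model of Chapter I, thresholds r and R): the two branches of (2.2.17), the relation P(r+1, λ_1) = ρ_0 P(R−1, λ_0), and the set-(1) formulas. All function and variable names below are an ad hoc sketch:

```python
def stationary_two_rate(lam0, lam1, mu, r, R, M=60):
    """Unnormalized stationary probabilities of the single-loop model:
    P(n,l0) = rho0**n for n <= r; the second branch of (2.2.17) for
    r <= n <= R-1; P(r+1,l1) = rho0*P(R-1,l0); set-(1) formulas,
    truncated at M for the geometric tail."""
    rho0, rho1 = lam0 / mu, lam1 / mu
    p0 = {n: rho0 ** n for n in range(r + 1)}
    D = rho0 ** r - rho0 ** R
    for n in range(r, R):
        p0[n] = rho0 ** r * (rho0 ** n - rho0 ** R) / D
    C = rho0 * p0[R - 1] / (1.0 - rho1)        # P(r+1,l1)/(1-rho1)
    p1 = {n: C * (1.0 - rho1 ** (n - r)) for n in range(r + 1, R + 1)}
    for n in range(R + 1, M):
        p1[n] = p1[R] * rho1 ** (n - R)
    return p0, p1

lam0, lam1, mu, r, R = 1.0, 1.5, 2.0, 2, 5
p0, p1 = stationary_two_rate(lam0, lam1, mu, r, R)
# Global balance at the switching state (r, lambda_0)  [cf. (2.2.3)]:
lhs = (lam0 + mu) * p0[r]
rhs = lam0 * p0[r - 1] + mu * p0[r + 1] + mu * p1[r + 1]
assert abs(lhs - rhs) < 1e-12
# and at (R, lambda_1)  [cf. (2.2.15) with N = 1]:
lhs = (lam1 + mu) * p1[R]
rhs = lam1 * p1[R - 1] + lam0 * p0[R - 1] + mu * p1[R + 1]
assert abs(lhs - rhs) < 1e-12
```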
(II)  The case of N overlapping loops:  That is, the case where,
besides the general restrictions on the  r_i's  and  R_i's  given in
section (2.1), we have the additional restriction  r_{i+1} < R_i
(i = 1,2,...,N−1).  The diagram of the process for this case is given
below:

Diagram 2.2

The sets (0) and (N) of equations which the stationary probabilities
P(n, λ_i)  satisfy are exactly the same as in case (I).  The i-th set
is given by
Set (i)     (i = 1,2,...,N−1)

(2.2.31)  μP(r_i+2, λ_i) − (μ+λ_i)P(r_i+1, λ_i) = 0

(2.2.32)  μP(n+2, λ_i) − (μ+λ_i)P(n+1, λ_i) + λ_i P(n, λ_i) = 0     (n = r_i+1, r_i+2, ..., r_{i+1}−2)

(2.2.33)  μP(r_{i+1}+1, λ_{i+1}) + μP(r_{i+1}+1, λ_i) − (μ+λ_i)P(r_{i+1}, λ_i) + λ_i P(r_{i+1}−1, λ_i) = 0

(2.2.34)  μP(n+2, λ_i) − (μ+λ_i)P(n+1, λ_i) + λ_i P(n, λ_i) = 0     (n = r_{i+1}, r_{i+1}+1, ..., R_i−2)

(2.2.35)  μP(R_i+1, λ_i) − (μ+λ_i)P(R_i, λ_i) + λ_i P(R_i−1, λ_i) + λ_{i−1} P(R_i−1, λ_{i−1}) = 0

(2.2.36)  μP(n+2, λ_i) − (μ+λ_i)P(n+1, λ_i) + λ_i P(n, λ_i) = 0     (n = R_i, R_i+1, ..., R_{i+1}−3)

(2.2.37)  λ_i P(R_{i+1}−2, λ_i) − (μ+λ_i)P(R_{i+1}−1, λ_i) = 0
Again we first solve the equations of the set (i) for  i = 1.  Notice
that (2.2.31) and (2.2.32) are exactly the same as (2.2.6) and (2.2.7)
respectively when we replace  r_{i+1}  by  R_i, and hence we have

(2.2.38)
    P(n, λ_1) = [P(r_1+1, λ_1)/(1 − ρ_1)](1 − ρ_1^{n−r_1})     (n = r_1+1, r_1+2, ..., r_2) ,

where  P(r_1+1, λ_1)  is given by (2.2.18).  The equation (2.2.33) can
be written as

    μP(r_2+1, λ_2) = (μ+λ_1)P(r_2, λ_1) − λ_1 P(r_2−1, λ_1) − μP(r_2+1, λ_1) .

Substituting the values of  P(r_2, λ_1)  and  P(r_2−1, λ_1)  from
(2.2.38) in the above equation, after some algebraic steps we arrive at

(2.2.39)
    P(r_2+1, λ_1) = [P(r_1+1, λ_1)/(1 − ρ_1)](1 − ρ_1^{r_2−r_1+1}) − P(r_2+1, λ_2) ,

where  P(r_2+1, λ_2)  is not yet known.

The solution to the difference equation (2.2.34) with (2.2.38) for
n = r_2  and (2.2.39) as initial conditions is given by

(2.2.40)
    P(n, λ_1) = [P(r_1+1, λ_1)/(1 − ρ_1)](1 − ρ_1^{n−r_1}) − [P(r_2+1, λ_2)/(1 − ρ_1)](1 − ρ_1^{n−r_2})     (n = r_2, r_2+1, ..., R_1) .
Next, equation (2.2.35) can be written as

    μP(R_1+1, λ_1) = (μ+λ_1)P(R_1, λ_1) − λ_1 P(R_1−1, λ_1) − λ_0 P(R_1−1, λ_0) .

Making use of (2.2.40) for  n = R_1  and  n = R_1−1, and also the
relation  P(r_1+1, λ_1) = ρ_0 P(R_1−1, λ_0), we arrive at

(2.2.41)
    P(R_1+1, λ_1) = [P(r_1+1, λ_1)/(1 − ρ_1)](ρ_1 − ρ_1^{R_1−r_1+1}) − [P(r_2+1, λ_2)/(1 − ρ_1)](1 − ρ_1^{R_1−r_2+1}) .

The solution to (2.2.36) with (2.2.40) for  n = R_1  and (2.2.41) as
initial conditions is given by

(2.2.42)
    P(n, λ_1) = [P(r_1+1, λ_1)/(1 − ρ_1)][(ρ_1^{r_1} − ρ_1^{R_1})/ρ_1^{R_1+r_1}] ρ_1^n + [P(r_2+1, λ_2)/(1 − ρ_1)](ρ_1^{n−r_2} − 1)     (n = R_1, R_1+1, ..., R_2−1) .

Lastly, equation (2.2.37) leads to

    λ_1 P(R_2−2, λ_1) = (μ+λ_1)P(R_2−1, λ_1) .

Substituting the values of  P(R_2−1, λ_1)  and  P(R_2−2, λ_1)  from
(2.2.42) into the above equation, after some algebraic steps we obtain

(2.2.43)
    P(r_2+1, λ_2) = [ρ_1^{R_2+r_2}(ρ_1^{r_1} − ρ_1^{R_1}) / (ρ_1^{R_1+r_1}(ρ_1^{r_2} − ρ_1^{R_2}))] P(r_1+1, λ_1) .
On substituting the value of  P(r_1+1, λ_1)  from (2.2.43) into
(2.2.42), it can be shown that

    P(n, λ_1) = [P(r_2+1, λ_2)/(1 − ρ_1)](ρ_1^{n−R_2} − 1)     (n = R_1, R_1+1, ..., R_2−1) ,

from which we also have the relation

(2.2.44)
    P(r_2+1, λ_2) = ρ_1 P(R_2−1, λ_1) .

Thus we have completely solved the equations of the set (1), and note
that for solving the set (1) the only information we need from the set
(0) is the relation  P(r_1+1, λ_1) = ρ_0 P(R_1−1, λ_0).  Similarly, we
use the relation  P(r_2+1, λ_2) = ρ_1 P(R_2−1, λ_1)  for solving the
set (2).  Making exactly the same argument as in case (I), we give
below the solutions of the sets (i)  (i = 0,1,...,N):

(2.2.45)
    P(n, λ_0) = ρ_0^n P(0, λ_0)                                             (n = 0,1,...,r_1) ,
    P(n, λ_0) = [ρ_0^{r_1}/(ρ_0^{r_1} − ρ_0^{R_1})](ρ_0^n − ρ_0^{R_1}) P(0, λ_0)     (n = r_1, r_1+1, ..., R_1−1) .
For  i = 1,2,...,N−1  we have

(2.2.46)
    P(n, λ_i) = [P(r_i+1, λ_i)/(1 − ρ_i)](1 − ρ_i^{n−r_i})     (n = r_i+1, r_i+2, ..., r_{i+1}) ,
    P(n, λ_i) = [P(r_i+1, λ_i)/(1 − ρ_i)](1 − ρ_i^{n−r_i}) − [P(r_{i+1}+1, λ_{i+1})/(1 − ρ_i)](1 − ρ_i^{n−r_{i+1}})     (n = r_{i+1}, r_{i+1}+1, ..., R_i) ,
    P(n, λ_i) = [P(r_{i+1}+1, λ_{i+1})/(1 − ρ_i)](ρ_i^{n−R_{i+1}} − 1)     (n = R_i, R_i+1, ..., R_{i+1}−1) .

Finally, we have

(2.2.47)
    P(n, λ_N) = [P(r_N+1, λ_N)/(1 − ρ_N)](1 − ρ_N^{n−r_N})               (n = r_N+1, r_N+2, ..., R_N) ,
    P(n, λ_N) = [P(r_N+1, λ_N)/(1 − ρ_N)](1 − ρ_N^{R_N−r_N}) ρ_N^{n−R_N}  (n ≥ R_N) ,

where

    P(r_1+1, λ_1) = ρ_0 P(R_1−1, λ_0) = [ρ_0^{R_1+r_1}(1 − ρ_0)/(ρ_0^{r_1} − ρ_0^{R_1})] P(0, λ_0)

and

(2.2.48)
    P(r_{i+1}+1, λ_{i+1}) = ρ_i P(R_{i+1}−1, λ_i)
        = [ρ_i^{R_{i+1}+r_{i+1}}(ρ_i^{r_i} − ρ_i^{R_i}) / (ρ_i^{R_i+r_i}(ρ_i^{r_{i+1}} − ρ_i^{R_{i+1}}))] P(r_i+1, λ_i)     (i = 1,2,...,N−1) .

Again  P(0, λ_0)  will be determined later from the normalizing
equation.

(III)  The case of N trivially overlapping loops:  This is the case
where  r_{i+1} = R_i  (i = 1,2,...,N−1), besides the restrictions on
the  r_i's  and  R_i's  as given in section (2.1).  The diagram of the
process for this case is as follows:

Diagram 2.3
This case is really a special case of both (I) and (II), with the
understanding that  r_{i+1} = R_i  (i = 1,2,...,N−1).  The difference
equations satisfied by the stationary probabilities are the same as in
cases (I) and (II), except that instead of equations (2.2.8), (2.2.9),
(2.2.10) of case (I), or equations (2.2.33), (2.2.34), (2.2.35) of case
(II), we only have the single equation

    μP(r_{i+1}+1, λ_{i+1}) + μP(r_{i+1}+1, λ_i) − (μ+λ_i)P(r_{i+1}, λ_i) + λ_i P(r_{i+1}−1, λ_i) + λ_{i−1} P(R_i−1, λ_{i−1}) = 0 .

The solutions for the  P(n, λ_i)'s  are the same as in case (I), except
that in (2.2.28) the case where  n = R_i+1, ..., r_{i+1}  does not
arise; or are the same as in case (II), except that in (2.2.46) the
case where  n = r_{i+1}+1, ..., R_i  does not arise.

Having obtained the solutions for the above three basic cases, it is
easy to see how to obtain the solutions for other cases that we have
not considered.
Let  P(λ_i)  (i = 0,1,...,N)  denote the probability of finding the
system in a state where the arrival rate is  λ_i.  Clearly

(2.2.49)
    P(λ_i) = Σ_{n=0}^{∞} P(n, λ_i) .

It can be shown, on making use of the relations between
P(r_{i+1}+1, λ_{i+1}), P(r_i+1, λ_i)  and  P(0, λ_0)  given by (2.2.30)
or (2.2.48), that in every case

(2.2.50)
    P(λ_0) = Σ_{n=0}^{R_1−1} P(n, λ_0) = P(0, λ_0)/(1 − ρ_0) − (R_1 − r_1) P(r_1+1, λ_1)/(1 − ρ_0) ,

(2.2.51)
    P(λ_i) = Σ_{n=r_i+1}^{R_{i+1}−1} P(n, λ_i)
           = (R_i − r_i) P(r_i+1, λ_i)/(1 − ρ_i) − (R_{i+1} − r_{i+1}) P(r_{i+1}+1, λ_{i+1})/(1 − ρ_i)     (i = 1,2,...,N−1) ,

(2.2.52)
    P(λ_N) = Σ_{n=r_N+1}^{∞} P(n, λ_N) = (R_N − r_N) P(r_N+1, λ_N)/(1 − ρ_N) .
The normalizing condition requires

    1 = Σ_{i=0}^{N} P(λ_i) ,

which on using (2.2.50), (2.2.51) and (2.2.52) leads to

(2.2.53)
    P(0, λ_0) = 1 / {1/(1 − ρ_0) + Σ_{i=1}^{N} (R_i − r_i)[1/(1 − ρ_i) − 1/(1 − ρ_{i−1})] P(r_i+1, λ_i)/P(0, λ_0)} ,

where the  P(r_i+1, λ_i)'s  are expressed in terms of  P(0, λ_0)  by
means of relations (2.2.30), so that each ratio
P(r_i+1, λ_i)/P(0, λ_0)  is known.  Thus we can determine  P(0, λ_0)
from (2.2.53).  We have therefore determined the probability
distribution of the number of units in the system for all cases
considered.
2.3 The expected number of units in the system  (L_s)  and the
corresponding waiting time  (W_s):  We discovered above that the
expressions for the  P(λ_i)'s  for all cases considered are exactly the
same.  The same is true in this case too, and this will be obvious from
the next section when we deal with the generating function.  Denote by
L_{s_i}  the contribution to  L_s  from the states  (n, λ_i), where

    L_{s_i} = Σ_{n=0}^{∞} n P(n, λ_i)     (i = 0,1,2,...,N) .
It can be shown, by means of simple but lengthy algebra wherein we make
use of the relations (2.2.30), that for all cases

(2.3.1)
    L_{s_0} = Σ_{n=0}^{R_1−1} n P(n, λ_0)
            = [ρ_0/(1−ρ_0)^2] P(0, λ_0) − [(R_1 − ρ_0 r_1)/(1−ρ_0)^2 + (1/2)(R_1+r_1)(R_1−r_1−1)/(1−ρ_0)] P(r_1+1, λ_1) .

For  i = 1,2,...,N−1,

(2.3.2)
    L_{s_i} = Σ_{n=r_i+1}^{R_{i+1}−1} n P(n, λ_i)
            = [(1/2)(R_i+r_i)(R_i−r_i−1)/(1−ρ_i) + (R_i − ρ_i r_i)/(1−ρ_i)^2] P(r_i+1, λ_i)
              − [(1/2)(R_{i+1}+r_{i+1})(R_{i+1}−r_{i+1}−1)/(1−ρ_i) + (R_{i+1} − ρ_i r_{i+1})/(1−ρ_i)^2] P(r_{i+1}+1, λ_{i+1}) ,

and

(2.3.3)
    L_{s_N} = Σ_{n=r_N+1}^{∞} n P(n, λ_N)
            = [(1/2)(R_N+r_N)(R_N−r_N−1)/(1−ρ_N) + (R_N − ρ_N r_N)/(1−ρ_N)^2] P(r_N+1, λ_N) .

Adding these contributions we obtain

(2.3.4)
    L_s = [ρ_0/(1−ρ_0)^2] P(0, λ_0)
          − Σ_{i=1}^{N} [ (R_i − ρ_{i−1} r_i)/(1−ρ_{i−1})^2
                          + (1/2)(R_i+r_i)(R_i−r_i−1)(ρ_{i−1}−ρ_i)/((1−ρ_{i−1})(1−ρ_i))
                          − (R_i − ρ_i r_i)/(1−ρ_i)^2 ] P(r_i+1, λ_i) ,

where the  P(r_i+1, λ_i)'s  and  P(0, λ_0)  are known from (2.2.30) and
(2.2.53).
The expected waiting time in the system  (W_s)  is, on using Little's
formula, given by

    W_s = L_s / λ̄ ,     where     λ̄ = Σ_{i=0}^{N} λ_i P(λ_i) .
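Little's formula as used here can be illustrated in the simple-queue special case λ_0 = ... = λ_N = λ, where λ̄ = λ and P(n) = (1−ρ)ρ^n; a minimal numerical sketch, not part of the thesis:

```python
lam, mu = 1.0, 2.0
rho = lam / mu
# E(number in system) for the simple queue, truncated geometric sum:
L = sum(n * (1 - rho) * rho ** n for n in range(1, 400))
W = L / lam        # Little's formula: W = L / lambda-bar
assert abs(L - rho / (1 - rho)) < 1e-12
assert abs(W - 1.0 / (mu - lam)) < 1e-12
```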
2.4 The generating function of the probabilities of the number of units
in the system:  The probability of finding  n  units in the system is
given by

    P(n) = Σ_{i=0}^{N} P(n, λ_i) .

Define the generating functions of the  P(n)'s  and  P(n, λ_i)'s  as
follows:

(2.4.1)
    h(z) = Σ_{n=0}^{∞} z^n P(n) ,
    h(z, λ_i) = Σ_{n=0}^{∞} z^n P(n, λ_i)     (i = 0,1,2,...,N) ,

and recall that the following states are inadmissible:

    (n, λ_i)  for  n < r_i+1,  n > R_{i+1}−1     (i = 1,2,...,N−1) ,
    (n, λ_N)  for  n < r_N+1 .

Clearly

(2.4.2)
    h(z) = Σ_{i=0}^{N} h(z, λ_i) .
Using the definitions (2.4.1), by considering the difference equations
of the set (0) for all the three cases considered in section (2.2), we
arrive in every case at the following:

(2.4.3)
    h(z, λ_0) = [μ(1−z)P(0, λ_0) + λ_0 z^{R_1+1} P(R_1−1, λ_0) − μ z^{r_1+1} P(r_1+1, λ_1)] / [λ_0 z^2 − (μ+λ_0)z + μ] .

This is so because the equations of the set (0) for all cases are the
same as equations (1.2.10) through (1.2.14) when we replace  r_1  and
R_1  by  r  and  R  respectively.

Also it is easy to see that in every case the equations of the set (i)
(i = 1,2,...,N−1) lead to

    (μ/z)[h(z, λ_i) − z^{r_i+1} P(r_i+1, λ_i)] − (μ+λ_i)h(z, λ_i)
        + λ_i z[h(z, λ_i) − z^{R_{i+1}−1} P(R_{i+1}−1, λ_i)]
        + λ_{i−1} z^{R_i} P(R_i−1, λ_{i−1}) + μ z^{r_{i+1}} P(r_{i+1}+1, λ_{i+1}) = 0 ,

which on simplification gives

(2.4.4)
    h(z, λ_i) = [μ z^{r_i+1} P(r_i+1, λ_i) + λ_i z^{R_{i+1}+1} P(R_{i+1}−1, λ_i)
                 − λ_{i−1} z^{R_i+1} P(R_i−1, λ_{i−1}) − μ z^{r_{i+1}+1} P(r_{i+1}+1, λ_{i+1})] / [λ_i z^2 − (μ+λ_i)z + μ] .

Lastly, the set (N) of equations for all cases is the same as equations
(1.2.14) through (1.2.18) when we replace  r  and  R  by  r_N  and  R_N
respectively, and hence

(2.4.5)
    h(z, λ_N) = [μ z^{r_N+1} P(r_N+1, λ_N) − λ_{N−1} z^{R_N+1} P(R_N−1, λ_{N−1})] / [λ_N z^2 − (μ+λ_N)z + μ] .
Except for the subscripts in  r_1  and  R_1, equation (2.4.3) is the
same as (1.4.3), and hence, as in Chapter I, we arrive at the following
relation:

(2.4.6)
    λ_0 P(R_1−1, λ_0) = μ P(r_1+1, λ_1) ,   i.e.,   P(r_1+1, λ_1) = ρ_0 P(R_1−1, λ_0) .

Let us first consider (2.4.4) for  i = 1.  Making use of (2.4.6) we can
write

(2.4.7)
    h(z, λ_1) = [μ P(r_1+1, λ_1)(z^{r_1+1} − z^{R_1+1}) + λ_1 z^{R_2+1} P(R_2−1, λ_1) − μ z^{r_2+1} P(r_2+1, λ_2)] / [λ_1 z^2 − (μ+λ_1)z + μ] .

Since  h(z, λ_1)  converges for all values of  z, the zeros of the
denominator of (2.4.7) must coincide with the zeros of the numerator.
But clearly the zeros of the denominator occur at  z = 1  and
z = 1/ρ_1, and hence we arrive at the following equations:

    λ_1 P(R_2−1, λ_1) − μ P(r_2+1, λ_2) = 0 ,

    μ P(r_1+1, λ_1)[(1/ρ_1)^{r_1+1} − (1/ρ_1)^{R_1+1}] + λ_1 (1/ρ_1)^{R_2+1} P(R_2−1, λ_1)
        − μ (1/ρ_1)^{r_2+1} P(r_2+1, λ_2) = 0 .
Solving these two equations we obtain

    P(r_2+1, λ_2) = [ρ_1^{R_2+r_2}(ρ_1^{r_1} − ρ_1^{R_1}) / (ρ_1^{R_1+r_1}(ρ_1^{r_2} − ρ_1^{R_2}))] P(r_1+1, λ_1) .

Similarly considering (2.4.4) for  i = 2, making use of the relation
just obtained, and making exactly the same argument as above, we get

    P(r_3+1, λ_3) = [ρ_2^{R_3+r_3}(ρ_2^{r_2} − ρ_2^{R_2}) / (ρ_2^{R_2+r_2}(ρ_2^{r_3} − ρ_2^{R_3}))] P(r_2+1, λ_2) .

In this way we continue considering (2.4.4) successively for
i = 2,3,...,N−1 and obtain

(2.4.9)
    P(r_{i+1}+1, λ_{i+1}) = ρ_i P(R_{i+1}−1, λ_i)
        = [ρ_i^{R_{i+1}+r_{i+1}}(ρ_i^{r_i} − ρ_i^{R_i}) / (ρ_i^{R_i+r_i}(ρ_i^{r_{i+1}} − ρ_i^{R_{i+1}}))] P(r_i+1, λ_i)     (i = 1,2,...,N−1) .

Lastly, notice that (2.4.5) is the same as (1.4.4) when we replace  N
by 1, and hence we consider only the zero  z = 1  which occurs in the
denominator of (2.4.5).  We disregard the zero  z = 1/ρ_N,
since for the existence of equilibrium conditions we require
1/ρ_N > 1, and  h(z, λ_N)  may not converge for  |z| > 1.

Setting the numerator of (2.4.5) equal to zero and substituting  z = 1
(at which the numerator must vanish) we obtain the relation

    μ P(r_N+1, λ_N) = λ_{N−1} P(R_N−1, λ_{N−1}) ,

which is the same as (2.4.9) for  i = N−1.
Using (2.4.6) and (2.4.9) we can now write

(2.4.10)
    h(z, λ_0) = [(1−z)P(0, λ_0) + (z^{R_1+1} − z^{r_1+1})P(r_1+1, λ_1)] / [ρ_0 z^2 − (1+ρ_0)z + 1] ,

(2.4.11)
    h(z, λ_i) = [(z^{r_i+1} − z^{R_i+1})P(r_i+1, λ_i) + (z^{R_{i+1}+1} − z^{r_{i+1}+1})P(r_{i+1}+1, λ_{i+1})] / [ρ_i z^2 − (1+ρ_i)z + 1]     (i = 1,2,...,N−1) ,

(2.4.12)
    h(z, λ_N) = (z^{r_N+1} − z^{R_N+1})P(r_N+1, λ_N) / [ρ_N z^2 − (1+ρ_N)z + 1] .
Taking the limits as  z → 1  of (2.4.10), (2.4.11) and (2.4.12), and
using l'Hospital's rule, we obtain the expressions for the  P(λ_i)'s,
since

    P(λ_i) = lim_{z→1} h(z, λ_i) ,

and as expected these expressions are the same as (2.2.50), (2.2.51)
and (2.2.52).  Thus we have determined the generating function
h(z) = Σ_{i=0}^{N} h(z, λ_i), where the  h(z, λ_i)'s  are given by
(2.4.10), (2.4.11) and (2.4.12).  The relation between the
P(r_i+1, λ_i)'s  and  P(0, λ_0)  is obtained from (2.4.6) and (2.4.9),
and  P(0, λ_0)  is determined from the normalizing condition (2.2.53).

It may be noted that (2.4.10) and (2.4.11) are true even if some or all
of the  ρ_i's  (i = 0,1,...,N−1)  are equal to unity, whereas (2.4.6)
and (2.4.9) will have to be modified by taking limits.

Having determined  h(z), we can find the moments of the distribution of
the number of units in the system by the usual methods.
2.5 The Laplace transform of the distribution of the busy period.
As in the case of the generating function, we get the same expression
for the Laplace transform of the busy-period distribution for all cases
considered in section 2.2.  We shall therefore deal only with case (I),
that is, the case of N non-overlapping loops.  As in section (1.5) we
consider the modified process which ceases as soon as the number of
units in the system falls to zero.  The initial condition is that at
the beginning of the period the system is in the state  (1, λ_0).
Again we denote by  Q(t,n,λ_i)  the probability of the state  (n, λ_i)
at time  t.  The  (N+1)
sets of differential-difference equations which this process satisfies
are:

Set (0)

(2.5.1)   Q'(t,1,λ_0) = −(μ+λ_0)Q(t,1,λ_0) + μQ(t,2,λ_0)

          Q'(t,n+1,λ_0) = −(μ+λ_0)Q(t,n+1,λ_0) + μQ(t,n+2,λ_0) + λ_0 Q(t,n,λ_0)     (n = 1,2,...,r_1−2)

(2.5.2)   Q'(t,r_1,λ_0) = −(μ+λ_0)Q(t,r_1,λ_0) + μQ(t,r_1+1,λ_0) + λ_0 Q(t,r_1−1,λ_0) + μQ(t,r_1+1,λ_1)

          Q'(t,n+1,λ_0) = −(μ+λ_0)Q(t,n+1,λ_0) + μQ(t,n+2,λ_0) + λ_0 Q(t,n,λ_0)     (n = r_1, r_1+1, ..., R_1−3)

          Q'(t,R_1−1,λ_0) = −(μ+λ_0)Q(t,R_1−1,λ_0) + λ_0 Q(t,R_1−2,λ_0) ,
Set (i)     (i = 1,2,...,N−1)

    Q'(t,r_i+1,λ_i) = −(μ+λ_i)Q(t,r_i+1,λ_i) + μQ(t,r_i+2,λ_i)

    Q'(t,n+1,λ_i) = −(μ+λ_i)Q(t,n+1,λ_i) + μQ(t,n+2,λ_i) + λ_i Q(t,n,λ_i)     (n = r_i+1, r_i+2, ..., R_i−2)

    Q'(t,R_i,λ_i) = −(μ+λ_i)Q(t,R_i,λ_i) + μQ(t,R_i+1,λ_i) + λ_i Q(t,R_i−1,λ_i) + λ_{i−1} Q(t,R_i−1,λ_{i−1})

    Q'(t,n+1,λ_i) = −(μ+λ_i)Q(t,n+1,λ_i) + μQ(t,n+2,λ_i) + λ_i Q(t,n,λ_i)     (n = R_i, R_i+1, ..., r_{i+1}−2)

    Q'(t,r_{i+1},λ_i) = −(μ+λ_i)Q(t,r_{i+1},λ_i) + μQ(t,r_{i+1}+1,λ_i) + λ_i Q(t,r_{i+1}−1,λ_i) + μQ(t,r_{i+1}+1,λ_{i+1})

    Q'(t,n+1,λ_i) = −(μ+λ_i)Q(t,n+1,λ_i) + μQ(t,n+2,λ_i) + λ_i Q(t,n,λ_i)     (n = r_{i+1}, r_{i+1}+1, ..., R_{i+1}−3)

    Q'(t,R_{i+1}−1,λ_i) = −(μ+λ_i)Q(t,R_{i+1}−1,λ_i) + λ_i Q(t,R_{i+1}−2,λ_i) ,

Set (N)

    Q'(t,r_N+1,λ_N) = −(μ+λ_N)Q(t,r_N+1,λ_N) + μQ(t,r_N+2,λ_N)

    Q'(t,n+1,λ_N) = −(μ+λ_N)Q(t,n+1,λ_N) + μQ(t,n+2,λ_N) + λ_N Q(t,n,λ_N)     (n = r_N+1, r_N+2, ..., R_N−2)

    Q'(t,R_N,λ_N) = −(μ+λ_N)Q(t,R_N,λ_N) + μQ(t,R_N+1,λ_N) + λ_N Q(t,R_N−1,λ_N) + λ_{N−1} Q(t,R_N−1,λ_{N−1})

    Q'(t,n+1,λ_N) = −(μ+λ_N)Q(t,n+1,λ_N) + μQ(t,n+2,λ_N) + λ_N Q(t,n,λ_N)     (n ≥ R_N) .
Introduce the partial generating functions

(2.5.5)
    q(t,z,λ_0) = Σ_{n=1}^{R_1−1} z^n Q(t,n,λ_0) ,
    q(t,z,λ_i) = Σ_{n=r_i+1}^{R_{i+1}−1} z^n Q(t,n,λ_i)     (i = 1,2,...,N−1) ,
    q(t,z,λ_N) = Σ_{n=r_N+1}^{∞} z^n Q(t,n,λ_N) ,

and because of the initial condition mentioned above we have

(2.5.6)
    q(0,z,λ_0) = z ,     q(0,z,λ_i) = 0     (i ≠ 0) .

Applying (2.5.5) to all equations of the  (N+1)  sets above except the
first equation of the set (0), that is, equation (2.5.1), and then
taking Laplace transforms with respect to time  t  and making use of
(2.5.6), we obtain as in Section 1.5

(2.5.7)
    q*(s,z,λ_0) = [μ z Q*(s,1,λ_0) + λ_0 z^{R_1+1} Q*(s,R_1−1,λ_0) − μ z^{r_1+1} Q*(s,r_1+1,λ_1) − z^2] / [λ_0 z^2 − (s+μ+λ_0)z + μ] ,
(2.5.8)
    q*(s,z,λ_i) = [μ z^{r_i+1} Q*(s,r_i+1,λ_i) + λ_i z^{R_{i+1}+1} Q*(s,R_{i+1}−1,λ_i)
                   − λ_{i−1} z^{R_i+1} Q*(s,R_i−1,λ_{i−1}) − μ z^{r_{i+1}+1} Q*(s,r_{i+1}+1,λ_{i+1})] / [λ_i z^2 − (s+μ+λ_i)z + μ]     (i = 1,2,...,N−1) ,

(2.5.9)
    q*(s,z,λ_N) = [μ z^{r_N+1} Q*(s,r_N+1,λ_N) − λ_{N−1} z^{R_N+1} Q*(s,R_N−1,λ_{N−1})] / [λ_N z^2 − (s+μ+λ_N)z + μ] .

Let the zeros of the quadratic  λ_i z^2 − (s+μ+λ_i)z + μ  be denoted by
α_i(s)  and  β_i(s):

(2.5.10)
    α_i(s), β_i(s) = [(s+μ+λ_i) ± {(s+μ+λ_i)^2 − 4μλ_i}^{1/2}] / (2λ_i)     (i = 0,1,2,...,N) ,

where in (2.5.10) we choose that value of the square root for which the
real part is positive.
Making exactly the same argument as in Section 1.5, namely that the
zeros of the denominator must coincide with the zeros of the numerator
in (2.5.7), (2.5.8) and (2.5.9), and considering only those zeros that
lie inside the circle of convergence, we arrive at the following
equations:

(2.5.11)
    μQ*(s,1,λ_0) + λ_0 α_0^{R_1} Q*(s,R_1−1,λ_0) − μ α_0^{r_1} Q*(s,r_1+1,λ_1) = α_0 ,
    μQ*(s,1,λ_0) + λ_0 β_0^{R_1} Q*(s,R_1−1,λ_0) − μ β_0^{r_1} Q*(s,r_1+1,λ_1) = β_0 ,

    μ α_i^{r_i} Q*(s,r_i+1,λ_i) − λ_{i−1} α_i^{R_i} Q*(s,R_i−1,λ_{i−1}) + λ_i α_i^{R_{i+1}} Q*(s,R_{i+1}−1,λ_i) − μ α_i^{r_{i+1}} Q*(s,r_{i+1}+1,λ_{i+1}) = 0 ,
    μ β_i^{r_i} Q*(s,r_i+1,λ_i) − λ_{i−1} β_i^{R_i} Q*(s,R_i−1,λ_{i−1}) + λ_i β_i^{R_{i+1}} Q*(s,R_{i+1}−1,λ_i) − μ β_i^{r_{i+1}} Q*(s,r_{i+1}+1,λ_{i+1}) = 0     (i = 1,2,...,N−1) ,

    μ β_N^{r_N} Q*(s,r_N+1,λ_N) − λ_{N−1} β_N^{R_N} Q*(s,R_N−1,λ_{N−1}) = 0 .

(2.5.11) is a set of  (2N+1)  linear equations in  (2N+1)  unknowns,
and hence  Q*(s,1,λ_0)  can be determined.
But the p.d.f. of the length of the busy period  f_b(t)  is clearly
given by

    f_b(t) = Q'(t,0,λ_0) = μQ(t,1,λ_0) ,

on using (2.5.1).  That is,

    f_b*(s) = μQ*(s,1,λ_0)

on taking the Laplace transform, where  Q*(s,1,λ_0)  is determined from
(2.5.11).  Thus we have shown how, in principle, to obtain the Laplace
transform of the distribution of the busy period.  In practice it may
be very complicated.
2.6 The Laplace transform of the distribution of the time the system
spends in the subset of states  {(n, λ_i)}  (i = 0,1,...,N)  before it
goes out of the subset for the first time.  Clearly the problem for the
two cases  i = 0  and  i = N  is the same as in Section 1.6.  More
specifically, for  i = 0  the result is given by (1.6.1) with  r  and
R  replaced by  r_1  and  R_1  respectively, and for  i = N  the result
is given by (1.6.3) with  β_1, r  and  R  replaced by  β_N, r_N  and
R_N  respectively.

We shall now deal with the case where  i ≠ 0, N.  Without any loss of
generality let us consider the case  i = 1.  That is, we are interested
in the distribution of the time the system spends in the subset of
states  {(n, λ_1)}  before it goes out of the subset for the first
time, either to the subset  {(n, λ_0)}  or to the subset  {(n, λ_2)}.
Recall that for the subset  {(n, λ_1)},  n  goes from  r_1+1  to
R_2−1.  Let us denote by  f_m(t)  the p.d.f. of the required
distribution given that initially the process started from the state
(m, λ_1).  On using the notation  e(μ+λ_1)  for the exponential
distribution with parameter  μ+λ_1, it is easy to see that the
following equations are true:

(2.6.1)
    f_{r_1+1} = [μ/(μ+λ_1)] e(μ+λ_1) + [λ_1/(μ+λ_1)] e(μ+λ_1) ⊛ f_{r_1+2} ,

(2.6.2)
    f_{n+1} = [μ/(μ+λ_1)] e(μ+λ_1) ⊛ f_n + [λ_1/(μ+λ_1)] e(μ+λ_1) ⊛ f_{n+2}     (n = r_1+1, r_1+2, ..., R_2−3) ,

(2.6.3)
    f_{R_2−1} = [μ/(μ+λ_1)] e(μ+λ_1) ⊛ f_{R_2−2} + [λ_1/(μ+λ_1)] e(μ+λ_1) ,

where  ⊛  stands for the convolution and we have suppressed the
argument  t  in the  f_n(t)'s  for convenience.

On taking the Laplace transform with respect to  t  of (2.6.1), (2.6.2)
and (2.6.3) we have

(2.6.4)
    f*_{r_1+1}(s) = μ/(μ+λ_1+s) + [λ_1/(μ+λ_1+s)] f*_{r_1+2}(s) ,

(2.6.5)
    f*_{n+1}(s) = [μ/(μ+λ_1+s)] f*_n(s) + [λ_1/(μ+λ_1+s)] f*_{n+2}(s)     (n = r_1+1, ..., R_2−3) ,

(2.6.6)
    f*_{R_2−1}(s) = [μ/(μ+λ_1+s)] f*_{R_2−2}(s) + λ_1/(μ+λ_1+s) .

Notice that (2.6.5) is a homogeneous difference equation of the second
order, and (2.6.4) and (2.6.6) give the boundary conditions.  The
solution to (2.6.5) is

(2.6.7)
    f*_n(s) = C_1 a(s)^n + C_2 b(s)^n ,

where

    a(s) = [(μ+λ_1+s) + {(μ+λ_1+s)^2 − 4μλ_1}^{1/2}] / (2μ) ,
    b(s) = [(μ+λ_1+s) − {(μ+λ_1+s)^2 − 4μλ_1}^{1/2}] / (2μ) ,

and the constants  C_1  and  C_2  are determined from the boundary
conditions (2.6.4) and (2.6.6).  Thus, we have obtained the Laplace
transform of the p.d.f. of the required distribution.
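The system (2.6.4)-(2.6.6) is tridiagonal and can be solved numerically for any s; the sketch below (not part of the thesis; names are ad hoc) does so by elimination and checks that at s = 0 each f*_n(0) = 1, as must hold for a proper probability distribution:

```python
def passage_lst(s, lam1, mu, lo, hi):
    """f*_n(s) for n = lo..hi: LST of the time to leave the band [lo, hi]
    (exit below lo by a service, above hi by an arrival), solving the
    tridiagonal system (mu+lam1+s) f_k = mu f_{k-1} + lam1 f_{k+1}."""
    m = hi - lo + 1
    d = mu + lam1 + s
    a = [[0.0] * m for _ in range(m)]
    rhs = [0.0] * m
    for k in range(m):
        a[k][k] = d
        if k > 0:
            a[k][k - 1] = -mu
        else:
            rhs[k] += mu          # exit below from n = lo
        if k < m - 1:
            a[k][k + 1] = -lam1
        else:
            rhs[k] += lam1        # exit above from n = hi
    # forward elimination / back substitution for a tridiagonal matrix
    for k in range(1, m):
        w = a[k][k - 1] / a[k - 1][k - 1]
        a[k][k] -= w * a[k - 1][k]
        rhs[k] -= w * rhs[k - 1]
    f = [0.0] * m
    f[m - 1] = rhs[m - 1] / a[m - 1][m - 1]
    for k in range(m - 2, -1, -1):
        f[k] = (rhs[k] - a[k][k + 1] * f[k + 1]) / a[k][k]
    return f

f = passage_lst(0.0, 1.5, 2.0, 3, 7)
assert all(abs(x - 1.0) < 1e-9 for x in f)   # at s = 0 each f*_n(0) = 1
```

For s > 0 the solver returns values strictly between 0 and 1, as a Laplace-Stieltjes transform of a positive random variable must.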
2.7 The case when N is infinite.  When N is infinite it is clear that
the existence of equilibrium conditions depends upon the convergence of
the series [see (2.2.53)] whose terms involve the ratios
P(r_i+1, λ_i)/P(0, λ_0), and recall that because of (2.2.30) the ratio
P(r_i+1, λ_i)/P(0, λ_0)  involves only the  ρ_i's, r_i's  and  R_i's.
If the above series converges, then all the appropriate formulas
developed in this chapter will still hold when the number of loops is
infinite.

2.8 Remarks.  On putting  λ_0 = λ_1 = ... = λ_N, all the corresponding
results or formulas for a simple queue, subject to the usual
restriction  ρ_0 < 1  for the existence of equilibrium conditions, can
be obtained from this chapter.

It is also possible to superimpose a cost structure on the problem and
define a suitable cost function.  The parameters involved in the cost
function are clearly  r_i, R_i  and  ρ_i  (i = 1,2,...,N).  We shall
not attempt to pursue this problem further, as even in the case of a
single loop we are not able to get explicit expressions for the optimal
values of the parameters.
CHAPTER III

A SINGLE-COUNTER QUEUE WITH
CHANGEABLE SERVICE RATE

3.1 Introduction.  In this chapter we consider another queueing
process, under assumptions which are in some sense the opposite of
those considered in the first two chapters.  The following assumptions
are made:

(a)  Arrivals constitute a Poisson process with fixed parameter  λ.

(b)  Service time is distributed exponentially, but the parameter of
the distribution,  μ, is subject to changes depending on realizations
of the queue size.

Let  (r_1, R_1), (r_2, R_2), ..., (r_N, R_N)  be  N  pairs of integers
satisfying  r_1 ≥ 0;  r_1 < R_1, r_2 < R_2, ..., r_N < R_N;
r_1 < r_2 < ... < r_N;  R_1 < R_2 < ... < R_N.  Beginning from the
moment the system is idle, the service rate  μ  assumes a value  μ_0
as long as the number of units in the system  (ξ_s)  is strictly less
than  R_1.  When  ξ_s  assumes a value  R_1, the service rate changes
instantaneously from  μ_0  to  μ_1, and as long as  ξ_s  satisfies
r_1+1 ≤ ξ_s ≤ R_2−1  the service rate will remain constant at  μ_1.
Should  ξ_s  go down to  r_1,  μ  will change back from  μ_1  to  μ_0,
and in order that  μ  changes to  μ_1  again,  ξ_s  has to grow to size
R_1, and the same process is repeated.  With  μ  assuming a value
μ_1, should  ξ_s  reach a size  R_2,  μ  will change instantaneously
from  μ_1  to  μ_2, where the service rate will stay constant as long
as  ξ_s  satisfies  r_2+1 ≤ ξ_s ≤ R_3−1.  It will change back to  μ_1
when  ξ_s  drops to a size  r_2, but will take a value  μ_3  when  ξ_s
attains a size  R_3.  The case here is similar:  μ = μ_3  as long as
r_3+1 ≤ ξ_s ≤ R_4−1, and  μ  will change to  μ_2  when  ξ_s = r_3, but
will assume a value  μ_4  when  ξ_s  reaches a value  R_4, and so on.
When  μ = μ_{N−1}  we have  r_{N−1}+1 ≤ ξ_s ≤ R_N−1.  If  ξ_s  drops
down to a size  r_{N−1},  μ  will change from  μ_{N−1}  to  μ_{N−2},
whereas if  ξ_s  grows to a size  R_N,  μ  will change instantaneously
to  μ_N, and at this last stage there will be no further change in  μ
unless  ξ_s  goes down to a size  r_N, when  μ  changes back to
μ_{N−1}.  That is,  μ = μ_N  for all  ξ_s ≥ r_N+1.
Under the above model, we are interested in finding the stationary distribution of the number of units in the system (and also the generating function) and its expected value, the expected waiting time in the system, the Laplace transform of the distribution of the busy period, etc. It will be seen later that for the existence of steady state conditions we need the sole restriction λ < μ_N.
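The hysteresis switching rule above can be checked by direct simulation. The following sketch is not from the thesis: the function name, parameter values and step-up/step-down bookkeeping are illustrative assumptions; it tracks the fraction of time the process spends at each service rate μ_i.

```python
import random

def simulate(lmbda, mus, rs, Rs, T=50000.0, seed=1):
    """Simulate the single-counter queue whose exponential service rate
    switches between mu_0, ..., mu_N by the rule of Section 3.1: the rate
    steps up from mu_i to mu_{i+1} when the queue size K_s reaches R_{i+1},
    and steps back down when K_s falls to r_{i+1}.  rs = [r_1, ..., r_N],
    Rs = [R_1, ..., R_N].  Returns the fraction of time at each rate."""
    random.seed(seed)
    n, i, t = 0, 0, 0.0          # queue size K_s, rate index, clock
    time_in_rate = [0.0] * len(mus)
    while t < T:
        rate = lmbda + (mus[i] if n > 0 else 0.0)
        dt = random.expovariate(rate)
        time_in_rate[i] += dt
        t += dt
        if random.random() < lmbda / rate:       # arrival
            n += 1
            if i < len(mus) - 1 and n == Rs[i]:  # K_s hit R_{i+1}: step up
                i += 1
        else:                                    # service completion
            n -= 1
            if i > 0 and n == rs[i - 1]:         # K_s fell to r_i: step down
                i -= 1
    return [x / t for x in time_in_rate]
```

With λ < μ_N the simulated fractions settle near the stationary probabilities P(μ_i) derived in Section 3.2.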
As in Chapter II we shall restrict ourselves to three cases only, that is, the case of N non-overlapping loops, the case of N overlapping loops and the case of N trivially overlapping loops. The solution to other cases can easily be inferred from the solutions of the above three cases.
We shall deal with the first case in detail, whereas for the second and the third we shall simply write the solutions directly, as the mathematical techniques are very similar to those in Chapter II. We denote by (n, μ_i) the state K_s = n and μ = μ_i (n = 0, 1, 2, ...; i = 0, 1, 2, ..., N).

By virtue of our assumptions the following states are clearly inadmissible:

(n, μ_i)  for n < r_i + 1 or n > R_{i+1} − 1    (i = 1, 2, ..., N−1)

(n, μ_N)  for n < r_N + 1.
3.2 The distribution of K_s, the number of units in the system.

(I) The case of N non-overlapping loops, that is, the case where 0 ≤ r_1 < R_1 < r_2 < R_2 < ... < R_{N-1} < r_N < R_N. The diagram of the process for this case is given below:

Diagram 3.1
Denoting by P(n, μ_i) the stationary probability of the state (n, μ_i), the system satisfies the following (N+1) sets of difference equations. We will denote by (i) (i = 0, 1, ..., N) the set corresponding to μ_i.

Set (0)

(3.2.1)   μ_0 P(1, μ_0) − λ P(0, μ_0) = 0

(3.2.2)   μ_0 P(n+2, μ_0) − (μ_0+λ) P(n+1, μ_0) + λ P(n, μ_0) = 0

(3.2.4)   μ_0 P(n+2, μ_0) − (μ_0+λ) P(n+1, μ_0) + λ P(n, μ_0) = 0

Set (i), i = 1, 2, ..., N−1

          μ_i P(n+2, μ_i) − (μ_i+λ) P(n+1, μ_i) + λ P(n, μ_i) = 0
          (n = r_i+1, r_i+2, ..., R_i−2)

          μ_i P(n+2, μ_i) − (μ_i+λ) P(n+1, μ_i) + λ P(n, μ_i) = 0
          (n = R_i, R_i+1, ..., r_{i+1}−2)

Set (N)

(3.2.15)  μ_N P(n+2, μ_N) − (μ_N+λ) P(n+1, μ_N) + λ P(n, μ_N) = 0
          (n = r_N+1, r_N+2, ..., R_N−2)

          μ_N P(R_N+1, μ_N) − (μ_N+λ) P(R_N, μ_N) + λ P(R_N−1, μ_N)
              + λ P(R_N−1, μ_{N-1}) = 0

          μ_N P(n+2, μ_N) − (μ_N+λ) P(n+1, μ_N) + λ P(n, μ_N) = 0
          (n ≥ R_N).
We shall solve the above (N+1) sets of difference equations and express all P(n, μ_i)'s in terms of P(0, μ_0), which will be determined later from the normalizing equation. Furthermore, we shall only deal with the case λ ≠ μ_i (i = 0, 1, 2, ..., N−1). The results for some or all λ = μ_i can be obtained as limiting cases.
Equation (3.2.2) of the set (0) is a homogeneous difference equation of the second order, with (3.2.1) giving the initial condition. The solution is given by

(3.2.17)  P(n, μ_0) = b_0^n P(0, μ_0)    (n = 0, 1, ..., r_1),

where b_i = λ/μ_i. (Note that the b_i introduced in Chapter II have the same meaning, but for different processes.)
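The geometric form of the solution is easy to verify numerically. The helper below is an illustrative sketch (not from the thesis); b = λ/μ is assumed, consistent with b_0 = λ/μ_0 used later in the chapter.

```python
def recurrence_residual(lmbda, mu, n):
    """Check that P(n) = b**n with b = lmbda/mu satisfies the second-order
    difference equation of (3.2.2),
        mu*P(n+2) - (mu+lmbda)*P(n+1) + lmbda*P(n) = 0,
    up to the common factor P(0, mu_0)."""
    b = lmbda / mu
    P = lambda k: b ** k
    return mu * P(n + 2) - (mu + lmbda) * P(n + 1) + lmbda * P(n)
```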
Similarly the solution to (3.2.4) with (3.2.5) as the initial condition is

(3.2.18)  P(n, μ_0) = [b_0^{r_1} / (b_0^{r_1} − b_0^{R_1})] (b_0^n − b_0^{R_1}) P(0, μ_0)    (n = r_1, r_1+1, ..., R_1−1).

Substitution of P(r_1+1, μ_0), P(r_1, μ_0) and P(r_1−1, μ_0) from (3.2.18) and (3.2.17) into (3.2.3) leads to

(3.2.19)  P(r_1+1, μ_1) = [b_1 b_0^{R_1+r_1−1} / (b_0^{r_1} − b_0^{R_1})] (1 − b_0) P(0, μ_0).

This formula may be compared with formula (2.2.18).
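As a numerical cross-check of the reading of (3.2.17)-(3.2.19) given here (a reconstruction, since the printed formulas are partly illegible), one can substitute them into the balance equation at the state (r_1, μ_0); the residual should vanish. All parameter values are illustrative.

```python
def balance_residual_at_r1(lmbda, mu0, mu1, r1, R1):
    """Plug the reconstructed (3.2.17)-(3.2.19), with P(0, mu_0) = 1,
    into the balance equation at state (r1, mu_0):
        (lmbda+mu0) P(r1,mu0) = lmbda P(r1-1,mu0) + mu0 P(r1+1,mu0)
                                 + mu1 P(r1+1,mu1).
    Returns the residual, which should be numerically zero."""
    b0, b1 = lmbda / mu0, lmbda / mu1
    P_lo = lambda n: b0 ** n                        # (3.2.17): n = 0..r1
    C = b0 ** r1 / (b0 ** r1 - b0 ** R1)
    P_mid = lambda n: C * (b0 ** n - b0 ** R1)      # (3.2.18): n = r1..R1-1
    P_r1p1_mu1 = b1 * b0 ** (R1 + r1 - 1) * (1 - b0) / (b0 ** r1 - b0 ** R1)
    return ((lmbda + mu0) * P_lo(r1) - lmbda * P_lo(r1 - 1)
            - mu0 * P_mid(r1 + 1) - mu1 * P_r1p1_mu1)
```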
Next we shall solve equations of the set (i) for i = 1. The solution to the difference equation (3.2.7) with (3.2.6) as the initial condition is

Substituting the values of P(R_1, μ_1) and P(R_1−1, μ_1) from (3.2.20) into (3.2.8) and making use of (3.2.19), we arrive at the relation

Now (3.2.21) serves as an initial condition for equations (3.2.9), the solution of which is

The solution to (3.2.11) with (3.2.12) as the initial condition is

Substituting the values of P(r_2+1, μ_1), P(r_2, μ_1) and P(r_2−1, μ_1) into (3.2.10) we obtain, on using (3.2.22),

where P(r_1+1, μ_1) is given by (3.2.19). Making use of (3.2.24) we can write (3.2.23) in an alternative form
Thus we have solved equations of the set (1) completely. Notice that the sets of equations (i) (i = 1, 2, ..., N−1) are exactly the same except for the differences in the subscripts of the parameters. In solving the set (1) we made use of the information P(r_1+1, μ_1) = b_1 P(R_1−1, μ_0) given by the set (0). Similarly set (1) gives the information P(r_2+1, μ_2) = b_2 P(R_2−1, μ_1) for solving set (2). Clearly the solution to the set (2) can be obtained from the solution to the set (1) by merely replacing μ_1, μ_2, b_1, r_1, R_1, r_2, and R_2 by μ_2, μ_3, b_2, r_2, R_2, r_3, and R_3 respectively. In solving the set (3) we make use of the information P(r_3+1, μ_3) = b_3 P(R_3−1, μ_2), and so on. In this way we solve all equations of the sets (1), (2), ..., (N−1) successively. The solution to the (N−1)th set can be obtained from that of the first set by merely replacing μ_1, μ_2, b_1, r_1, R_1, r_2, and R_2 by μ_{N-1}, μ_N, b_{N-1}, r_{N-1}, R_{N-1}, r_N, and R_N respectively. Furthermore, we have the information P(r_N+1, μ_N) = b_N P(R_N−1, μ_{N-1}) for solving the (N)th set. With this information the solution of the (N)th set is given by
Summarizing the results, we have for the solutions of the equations of the sets (i) (i = 0, 1, ..., N):

(3.2.28)  P(n, μ_0) = b_0^n P(0, μ_0)    (n = 0, 1, ..., r_1)

(3.2.29)  P(n, μ_0) = [b_0^{r_1} / (b_0^{r_1} − b_0^{R_1})] (b_0^n − b_0^{R_1}) P(0, μ_0)    (n = r_1, r_1+1, ..., R_1−1)

For i = 1, 2, ..., N−1 we have

where

(3.2.31)  P(r_1+1, μ_1) = [b_1 b_0^{R_1+r_1−1} / (b_0^{r_1} − b_0^{R_1})] (1 − b_0) P(0, μ_0)  and
          P(r_{i+1}+1, μ_{i+1}) = b_{i+1} P(R_{i+1}−1, μ_i)    (i = 1, 2, ..., N−1).

The only unknown involved now is P(0, μ_0), which will be determined later.
(II) The case of N overlapping loops: The diagram of the process for this case is as given below.

Diagram 3.2

The solutions to the (N+1) sets of difference equations corresponding to this case are as follows.
For i = 1, 2, ..., N−1 we have

Finally, we have

where the relations between P(r_1+1, μ_1) and P(0, μ_0) and between P(r_{i+1}+1, μ_{i+1}) and P(r_i+1, μ_i) (i = 1, 2, ..., N−1) are the same as given by (3.2.31).

(III) The case of N trivially overlapping loops: The diagram of the process for this case is as shown below.
Diagram 3.3

This is really a special case of both (I) and (II), with the understanding that r_{i+1} = R_i (i = 1, 2, ..., N−1). The solutions for the P(n, μ_i)'s are the same as in case (I) except that in (3.2.29) the case where n = R_i+1, ..., r_{i+1} does not arise, or are the same as in case (II) except that in (3.2.33) the case where n = r_{i+1}+1, ..., R_i does not arise.
The probability P(μ_i) (i = 0, 1, ..., N) of finding the system in a state where the service rate is μ_i is clearly given by P(μ_i) = Σ_{n=0}^∞ P(n, μ_i). On making use of (3.2.31), it can be shown that in every case we have

provided b_N < 1. If b_N ≥ 1 the series Σ_{n=0}^∞ P(n, μ_N) does not converge and there will be no stationary distribution.

The normalizing condition requires 1 = Σ_{i=0}^N P(μ_i), which, on making use of (3.2.25), leads to

where the P(r_i+1, μ_i)'s are expressed in terms of P(0, μ_0) by means of relations (3.2.31).
Thus we can determine P(0, μ_0) from (3.2.36). That is, we have completely determined the probability distribution of the number of units in the system.
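Determining P(0, μ_0) from the normalizing equation can be mimicked numerically. The sketch below handles the single-loop (N = 1) case by uniformized power iteration on a truncated state space; the truncation level, iteration count and function name are illustrative assumptions, not part of the thesis.

```python
def stationary_distribution(lmbda, mu0, mu1, r1, R1, nmax=60, iters=8000):
    """Brute-force stand-in for the normalizing equation (3.2.36) when
    N = 1: states (n, 0) for n = 0..R1-1 and (n, 1) for n = r1+1..nmax,
    truncated at nmax (valid when b_1 = lmbda/mu1 < 1)."""
    states = [(n, 0) for n in range(R1)] + \
             [(n, 1) for n in range(r1 + 1, nmax + 1)]
    idx = {s: k for k, s in enumerate(states)}

    def moves(n, i):
        out = []
        if i == 0:
            # arrival; the rate steps up to mu_1 when the queue reaches R1
            out.append(((n + 1, 1) if n + 1 == R1 else (n + 1, 0), lmbda))
            if n > 0:
                out.append(((n - 1, 0), mu0))
        else:
            if n < nmax:
                out.append(((n + 1, 1), lmbda))
            # service; the rate steps back down when the queue falls to r1
            out.append(((n - 1, 0) if n - 1 == r1 else (n - 1, 1), mu1))
        return out

    Lam = lmbda + max(mu0, mu1)            # uniformization constant
    p = [1.0 / len(states)] * len(states)
    for _ in range(iters):
        q = [0.0] * len(states)
        for k, (n, i) in enumerate(states):
            stay = 1.0
            for dst, r in moves(n, i):
                q[idx[dst]] += p[k] * r / Lam
                stay -= r / Lam
            q[k] += p[k] * stay
        p = q
    return {s: p[k] for k, s in enumerate(states)}
```

The exact relation μ_0 P(1, μ_0) = λ P(0, μ_0) of (3.2.1) gives a convenient check on the output.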
3.3 The expected number of units in the system (L_S) and the corresponding waiting time (W_S). Just as in Section 2.3, all the three cases considered lead to the same expressions for L_S and W_S.

Let L_S_i denote the contribution to L_S from the states (n, μ_i) (i = 0, 1, ..., N). Clearly

L_S_i = Σ_{n=0}^∞ n P(n, μ_i).

On making use of relations (3.2.31) it can be shown that

For i = 1, 2, ..., N−1
Combining (3.3.1), (3.3.2) and (3.3.3) we obtain

(3.3.4)   L_S = Σ_{i=0}^N L_S_i = [b_0 / (1−b_0)²] P(0, μ_0)

where P(0, μ_0) and the P(r_i+1, μ_i)'s are given by (3.2.31).

The expected waiting time in the system is obtained from the relation W_S = L_S/λ.
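Once the distribution is known, L_S follows by direct summation and W_S from Little's formula L = λW (reference [4] of the bibliography). The helper below is an illustrative sketch, not the thesis's derivation; here the effective arrival rate is λ since no customers are turned away.

```python
def expected_values(dist, lmbda):
    """Given a stationary distribution {(n, i): prob} of the number of
    units in the system, return (L_S, W_S) with W_S = L_S / lmbda
    (Little's formula)."""
    L = sum(n * p for (n, _i), p in dist.items())
    return L, L / lmbda
```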
3.4 The generating function of the probabilities of the number of units in the system. The probability P(n) of finding n units in the system is

P(n) = Σ_{i=0}^N P(n, μ_i).

Define

(3.4.1)   h(z, μ_i) = Σ_n z^n P(n, μ_i);

then clearly the generating function of the P(n)'s is given by

(3.4.2)   h(z) = Σ_{n=0}^∞ z^n P(n) = Σ_{i=0}^N h(z, μ_i).

Using the definition (3.4.1) and considering the (N+1) sets of equations of case (I), that is, equations (3.2.1) through (3.2.16), or the (N+1) sets of equations corresponding to case (II) or case (III), we arrive as in Section 2.4 at the following expressions for the h(z, μ_i)'s.

For i = 1, 2, ..., N−1
Clearly the zeros of the quadratic λz² − (μ_i+λ)z + μ_i are given by z = 1 and z = 1/b_i (i = 0, 1, ..., N). Using exactly the same kind of argument as in Section 2.4, that is, that the zeros of the denominators must coincide with the corresponding zeros of the numerators for which the h(z, μ_i)'s converge, we finally obtain

(3.4.6)   h(z, μ_0) = [μ_0(1−z) P(0, μ_0) + μ_1 P(r_1+1, μ_1)(z^{R_1+1} − z^{r_1+1})] / [λz² − (μ_0+λ)z + μ_0]

For i = 1, 2, ..., N−1

(3.4.8)

where

(3.4.9)   P(r_1+1, μ_1) = [b_1 b_0^{R_1+r_1−1} / (b_0^{r_1} − b_0^{R_1})] (1 − b_0) P(0, μ_0)  and
          P(r_{i+1}+1, μ_{i+1}) = b_{i+1} P(R_{i+1}−1, μ_i)    (i = 1, 2, ..., N−1).
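The coincidence of zeros can be tested numerically: under the reading of (3.2.17)-(3.2.19) and (3.4.6) used here (a reconstruction, so an assumption), the rational form of h(z, μ_0) must agree with the finite sum Σ z^n P(n, μ_0). Parameters below are illustrative.

```python
def h_mu0(z, lmbda, mu0, mu1, r1, R1):
    """Evaluate h(z, mu_0) two ways, with P(0, mu_0) = 1:
    directly as the finite sum over the states (n, mu_0), and from the
    closed rational expression of (3.4.6) as read here."""
    b0, b1 = lmbda / mu0, lmbda / mu1
    C = b0 ** r1 / (b0 ** r1 - b0 ** R1)
    P = [b0 ** n for n in range(r1 + 1)] + \
        [C * (b0 ** n - b0 ** R1) for n in range(r1 + 1, R1)]
    direct = sum(z ** n * p for n, p in enumerate(P))
    Pr1p1 = b1 * b0 ** (R1 + r1 - 1) * (1 - b0) / (b0 ** r1 - b0 ** R1)
    closed = (mu0 * (1 - z) + mu1 * Pr1p1 * (z ** (R1 + 1) - z ** (r1 + 1))) \
             / (lmbda * z ** 2 - (mu0 + lmbda) * z + mu0)
    return direct, closed
```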
Taking the limits as z → 1 of (3.4.6), (3.4.7) and (3.4.8), and on using l'Hospital's rule, we obtain the expressions for the P(μ_i)'s, since P(μ_i) = lim_{z→1} h(z, μ_i). As expected these expressions are the same as those given by (3.2.25), and the only unknown quantity, P(0, μ_0), involved in the h(z, μ_i)'s is determined from (3.2.26).
Having determined h(z, μ_i) (i = 0, 1, ..., N), the generating function h(z) of the P(n)'s can be obtained from (3.4.2) and hence the moments can be found in the usual manner.

It may be noted that (3.4.3) and (3.4.4) are true even if some or all b_i's (i = 0, 1, ..., N−1) are equal to unity, whereas the relations given by (3.4.9) have to be modified on taking limits.
3.5 The Laplace transform of the distribution of the busy period. As in the case of the generating function, all the three cases considered in Section 3.2 lead to the same result. We shall therefore deal only with case (I). Again we consider the modified process which ceases as soon as the number of units in the system falls to zero. The initial condition is that at the start of the period the system is in state (1, μ_0). Denoting by Q(t, n, μ_i) the probability of the state (n, μ_i) at time t, the modified process satisfies the following (N+1) sets of differential-difference equations:

Set (0)

(3.5.1)   Q'(t, 0, μ_0) = μ_0 Q(t, 1, μ_0)
Q'(t, 1, μ_0) = −(μ_0+λ) Q(t, 1, μ_0) + μ_0 Q(t, 2, μ_0)

Q'(t, n+1, μ_0) = −(μ_0+λ) Q(t, n+1, μ_0) + μ_0 Q(t, n+2, μ_0) + λ Q(t, n, μ_0)
    (n = 1, 2, ..., r_1−2)

Q'(t, r_1, μ_0) = −(μ_0+λ) Q(t, r_1, μ_0) + μ_0 Q(t, r_1+1, μ_0) + λ Q(t, r_1−1, μ_0) + μ_1 Q(t, r_1+1, μ_1)

Q'(t, n+1, μ_0) = −(μ_0+λ) Q(t, n+1, μ_0) + μ_0 Q(t, n+2, μ_0) + λ Q(t, n, μ_0)

Q'(t, R_1−1, μ_0) = −(μ_0+λ) Q(t, R_1−1, μ_0) + λ Q(t, R_1−2, μ_0)

Set (i), i = 1, 2, ..., N−1

Q'(t, n+1, μ_i) = −(μ_i+λ) Q(t, n+1, μ_i) + μ_i Q(t, n+2, μ_i) + λ Q(t, n, μ_i)
    (n = r_i+1, r_i+1+1, ..., R_{i+1}−3)

Set (N)

Q'(t, r_N+1, μ_N) = −(μ_N+λ) Q(t, r_N+1, μ_N) + μ_N Q(t, r_N+2, μ_N)

(3.5.4)   Q'(t, n+1, μ_N) = −(μ_N+λ) Q(t, n+1, μ_N) + μ_N Q(t, n+2, μ_N) + λ Q(t, n, μ_N)

Q'(t, R_N, μ_N) = −(μ_N+λ) Q(t, R_N, μ_N) + μ_N Q(t, R_N+1, μ_N) + λ Q(t, R_N−1, μ_N) + λ Q(t, R_N−1, μ_{N-1})

Q'(t, n+1, μ_N) = −(μ_N+λ) Q(t, n+1, μ_N) + μ_N Q(t, n+2, μ_N) + λ Q(t, n, μ_N)
    (n ≥ R_N).
Introduce the partial generating functions

(3.5.5)   q(t, z, μ_0) = Σ_{n=1}^∞ z^n Q(t, n, μ_0),
          q(t, z, μ_i) = Σ_{n=0}^∞ z^n Q(t, n, μ_i)    (i = 1, 2, ..., N).

The initial conditions are given by

(3.5.6)

Applying (3.5.5) to all equations of the (N+1) sets above excepting equation (3.5.1) of the set (0), then taking Laplace transforms with respect to time t and making use of (3.5.6), we obtain as in Sections 1.5 and 2.5

(i = 1, 2, ..., N−1)
Let the zeros of the quadratic λz² − (s + μ_i + λ)z + μ_i be denoted by η_i(s) and δ_i(s), where

(3.5.10)  η_i(s), δ_i(s) = [(s + μ_i + λ) ∓ √{(s + μ_i + λ)² − 4λμ_i}] / 2λ    (i = 0, 1, 2, ..., N),

with η_i(s) having the negative sign in front of the radical, and where in (3.5.10) we choose that value of the square root for which the real part is positive.

Using the same kind of argument as in Section 1.5, that is, that the zeros of the denominator of q*(s, z, μ_i) must coincide with the zeros of the numerator for those zeros for which q*(s, z, μ_i) converges, and by considering (3.5.7), (3.5.8) and (3.5.9), we arrive at the following set of (2N+1) linear equations in (2N+1) unknowns
(3.5.11)

μ_0 Q*(s, 1, μ_0) + λ η_0^{R_1} Q*(s, R_1−1, μ_0) − μ_1 η_0^{r_1} Q*(s, r_1+1, μ_1) = η_0

μ_0 Q*(s, 1, μ_0) + λ δ_0^{R_1} Q*(s, R_1−1, μ_0) − μ_1 δ_0^{r_1} Q*(s, r_1+1, μ_1) = δ_0

(i = 1, 2, ..., N−1)
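The root computation of (3.5.10) can be sketched as follows; the convention that η_i takes the negative sign in front of the radical is an assumption carried over from the analogous definition (4.2.6) later in the thesis.

```python
import cmath

def eta_delta(s, lmbda, mu):
    """Roots of lambda z^2 - (s + mu + lambda) z + mu = 0, as in (3.5.10),
    taking the value of the square root with positive real part.
    Returns (eta, delta) with eta carrying the negative sign."""
    a = s + mu + lmbda
    rad = cmath.sqrt(a * a - 4 * lmbda * mu)
    if rad.real < 0:
        rad = -rad
    return (a - rad) / (2 * lmbda), (a + rad) / (2 * lmbda)
```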
But the p.d.f. of the length of the busy period, f_b(t), is given by

f_b(t) = Q'(t, 0, μ_0) = μ_0 Q(t, 1, μ_0), on using (3.5.1),

or in terms of Laplace transforms we have

f_b*(s) = μ_0 Q*(s, 1, μ_0),

where Q*(s, 1, μ_0) can be obtained from (3.5.11). Thus we have shown how, in principle, to obtain the Laplace transform of the distribution of the busy period.
In the special case where μ takes on two values μ_0 and μ_1, there is only one pair of integers (r_1, R_1), r_1 ≥ 0, r_1 < R_1, resulting in three equations in three unknowns. The first two of these equations are exactly the same as the first two equations of (3.5.11), and the third one can be obtained from the last equation of (3.5.11) by merely replacing μ_N, r_N, R_N and δ_N by μ_1, r_1, R_1 and δ_1 respectively.
In this case (3.5.13) gives the resulting explicit expression for f_b*(s) in terms of η_0(s), δ_0(s), η_1(s) and δ_1(s), with b_0 = λ/μ_0.
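Inverting the busy-period transform by hand is awkward; a Monte Carlo estimate provides a practical check. The sketch below is illustrative (names and parameters are not from the thesis); with μ_0 = μ_1 it reduces to the ordinary M/M/1 busy period, whose mean is 1/(μ − λ).

```python
import random

def mean_busy_period(lmbda, mu0, mu1, r1, R1, reps=20000, seed=2):
    """Estimate the mean busy period of the two-rate (N = 1) queue:
    start in state (1, mu_0) and run until the system empties."""
    random.seed(seed)
    total = 0.0
    for _ in range(reps):
        n, i, t = 1, 0, 0.0
        while n > 0:
            mu = mu0 if i == 0 else mu1
            t += random.expovariate(lmbda + mu)
            if random.random() < lmbda / (lmbda + mu):
                n += 1
                if i == 0 and n == R1:   # step up at R_1
                    i = 1
            else:
                n -= 1
                if i == 1 and n == r1:   # step down at r_1
                    i = 0
        total += t
    return total / reps
```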
3.6 The Laplace transform of the distribution of the time the system spends in the subset of states {(n, μ_i)} (i = 0, 1, ..., N) before it goes out of the subset for the first time. It is easy to see that these problems are exactly the same as those dealt with in Section 2.6 except for the differences in the parameters. Thus in this case the solution to the distribution problem connected with the subset of states {(n, μ_i)} can be obtained from the solution concerning the subset of states {(n, λ_i)} in Section 2.6 by merely putting λ_i = λ and μ = μ_i for i = 0, 1, ..., N.

3.7 The case when N is infinite.
As in Section 2.7, the existence of steady state conditions when N is infinite depends upon the convergence of the series [see (3.2.26)]

where the ratio P(r_i+1, μ_i)/P(0, μ_0) involves only the b_i's, r_i's and R_i's and is given by the relations (3.2.31). If the above series converges, all the appropriate formulae developed in this chapter are still valid when there are an infinite number of loops.
3.8 Remarks. On putting μ_1 = μ_2 = ... = μ_N = μ_0, all the corresponding results or formulas for a simple queue, subject to the usual restriction b_0 < 1 for the existence of equilibrium conditions, can be obtained from this chapter.

It is also possible to superimpose a cost structure on the problem and define a suitable cost function. The parameters involved in the cost function are clearly r_i, R_i and b_i (i = 1, 2, ..., N). Appendix (b) gives a brief discussion of the problem for the special case where N = 1, that is, the case where μ takes on two values.
CHAPTER IV
A SINGLE-SERVER QUEUE WITH VARIABLE ARRIVAL RATE
4.1 Introduction. In this chapter we attempt to solve a single-server queueing process under the assumptions (a) the service time is exponentially distributed with parameter μ and (b) the arrival process is a Poisson process with variable parameter λ. The meaning of assumption (b) can best be explained as follows: The probability of an arrival in a small interval of time of length δt is λδt, where λ may assume more than one value λ_0, λ_1, ..., λ_N. If the arrival rate is λ_m (m = 0, 1, ..., N) at any time t, then in the next interval of time of length δt the probability that λ_m will change to some other value is K_m δt. Given that λ_m has changed during the interval δt, we denote by p_mr the conditional probability that λ changes from λ_m to λ_r. Clearly p_mm = 0 and Σ_{r=0}^N p_mr = 1 (m = 0, 1, ..., N).
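The arrival process just described can be simulated directly; everything named below (function, parameter values) is an illustrative assumption, not from the thesis.

```python
import random

def simulate_arrivals(rates, K, P, T=10000.0, seed=3):
    """Simulate the variable-rate Poisson arrival process of Section 4.1:
    while the rate is rates[m], arrivals occur at rate rates[m]; the rate
    itself changes at rate K[m], jumping to rates[r] with probability
    P[m][r].  Returns the number of arrivals in (0, T)."""
    random.seed(seed)
    m, t, count = 0, 0.0, 0
    while True:
        total = rates[m] + K[m]
        t += random.expovariate(total)
        if t > T:
            return count
        if random.random() < rates[m] / total:
            count += 1                       # an arrival
        else:                                # the arrival rate changes
            u, acc = random.random(), 0.0
            for r, p in enumerate(P[m]):
                acc += p
                if u < acc:
                    m = r
                    break
```

For two values the long-run arrival rate is π_0λ_0 + π_1λ_1, which the count per unit time should approach.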
r=O
We shall first deal with problems arising from the above arrival
process, for example, the distribution of the inter-arrival time, the
distribution of the number of arrivals in an interval of length
t
etc.
In this whole chapter we shall consider the case where A takes only
two values
A
o
and A
l
at least in principle.
but the methods developed are perfectly general,
When A takes k values where
k
>2
we need to
solve a k-th degree polynomial the roots of which have to be found.
If A assumes only two values, \
to these two values
K
o
and
and A we have corresponding
l
K as explained in the first paragraph and
l
95
the conditional transition probabilities are given by p_01 = 1, p_10 = 1. The stationary probability that the arrival rate is λ_i is denoted by π_i, where

(4.1.1)   π_0 = K_1/(K_0 + K_1)  and  π_1 = K_0/(K_0 + K_1).
4.2 The distribution problems concerning inter-arrival times: The first problem we wish to study is

(A) The inter-arrival time distribution. Let Q_t(λ_i) (i = 0, 1) denote the probability that there is no arrival in an interval of time (0, t) and that the arrival rate at time t is λ_i, where it is to be understood that we are concerned with only an interval of time of length t no matter where it is placed on the time axis. By employing the 'δt' technique, clearly we have the following two differential equations:

(4.2.1)   Q_t'(λ_0) + (λ_0 + K_0) Q_t(λ_0) − K_1 Q_t(λ_1) = 0
          Q_t'(λ_1) + (λ_1 + K_1) Q_t(λ_1) − K_0 Q_t(λ_0) = 0.

Equations (4.2.1) are to be solved subject to the initial conditions

(4.2.2)   τ_0 ≡ Q_0(λ_0) = π_0λ_0/(π_0λ_0 + π_1λ_1),   τ_1 ≡ Q_0(λ_1) = π_1λ_1/(π_0λ_0 + π_1λ_1),

π_0 and π_1 being given by (4.1.1).

Denoting the Laplace transform of Q_t(λ_i) with respect to t by Q_s*(λ_i), Re(s) > 0, we have on taking the Laplace transform of (4.2.1) and making use of (4.2.2)
(4.2.3)   (s + λ_0 + K_0) Q_s*(λ_0) − K_1 Q_s*(λ_1) = τ_0
          −K_0 Q_s*(λ_0) + (s + λ_1 + K_1) Q_s*(λ_1) = τ_1,

from which we obtain

(4.2.4)   Q_s*(λ_0) = [τ_0(s + λ_1) + K_1] / [s² + (λ_0 + λ_1 + K_0 + K_1)s + (λ_0λ_1 + λ_0K_1 + λ_1K_0)]

          Q_s*(λ_1) = [τ_1(s + λ_0) + K_0] / [s² + (λ_0 + λ_1 + K_0 + K_1)s + (λ_0λ_1 + λ_0K_1 + λ_1K_0)].

With some algebraic manipulations we can express (4.2.4) in an alternative form

(4.2.5)

where

(4.2.6)   r_i = ½[(λ_0 + λ_1 + K_0 + K_1) ± √{((λ_0 − λ_1) + (K_0 − K_1))² + 4K_0K_1}]    (i = 0, 1),

with r_0 having the negative sign in front of the radical.
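The exponents r_0 and r_1 of (4.2.6) are the roots of s² + (λ_0+λ_1+K_0+K_1)s + (λ_0λ_1 + λ_0K_1 + λ_1K_0); a small helper (illustrative, not from the thesis) makes the sum and product checks immediate.

```python
import math

def r0_r1(l0, l1, K0, K1):
    """The exponents r_0, r_1 of (4.2.6), with r_0 taking the negative
    sign in front of the radical."""
    a = l0 + l1 + K0 + K1
    rad = math.sqrt(((l0 - l1) + (K0 - K1)) ** 2 + 4 * K0 * K1)
    return (a - rad) / 2, (a + rad) / 2
```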
If we denote by Q_t(λ) the probability of no arrival in (0, t), then clearly

Q_t(λ) = Q_t(λ_0) + Q_t(λ_1),

or in terms of Laplace transforms

(4.2.7)   Q_s*(λ) = Q_s*(λ_0) + Q_s*(λ_1).

Making use of (4.2.5), and noting that τ_0 + τ_1 = 1, we obtain
(4.2.8)   Q_s*(λ) = [(c − r_0)/(r_1 − r_0)] · 1/(s + r_0) + [1 − (c − r_0)/(r_1 − r_0)] · 1/(s + r_1),

where c = λ_0τ_1 + λ_1τ_0 + K_0 + K_1. Taking the inverse transform of (4.2.8) we get

(4.2.9)   Q_t(λ) = [(c − r_0)/(r_1 − r_0)] e^{−r_0 t} + [1 − (c − r_0)/(r_1 − r_0)] e^{−r_1 t}.
Let F(t) be the distribution function (d.f.) of the inter-arrival time; then clearly F(t) = 1 − Q_t(λ), and therefore the p.d.f. f(t) of the inter-arrival time is given by

(4.2.10)  f(t) = [(c − r_0)/(r_1 − r_0)] r_0 e^{−r_0 t} + [1 − (c − r_0)/(r_1 − r_0)] r_1 e^{−r_1 t},

which is a mixture of two exponential distributions. The expected value E(t) is given by

E(t) = (λ_0τ_1 + λ_1τ_0 + K_0 + K_1) / (λ_0λ_1 + λ_0K_1 + λ_1K_0),

that is,

(4.2.11)  E(t) = 1/λ̄,  where  λ̄ = (λ_0K_1 + λ_1K_0)/(K_0 + K_1) = π_0λ_0 + π_1λ_1, since π_0 + π_1 = 1.
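E(t) and the identity E(t) = 1/λ̄ can be confirmed numerically from the mixture form, under the arrival-biased initial weights τ_i as read here in (4.2.2) (an assumption where the print is unclear). The function name and values are illustrative.

```python
import math

def interarrival_mean(l0, l1, K0, K1):
    """Compute E(t) from the mixture (4.2.10) and return it together with
    1/lambda-bar, lambda-bar = (l0*K1 + l1*K0)/(K0 + K1)."""
    pi0, pi1 = K1 / (K0 + K1), K0 / (K0 + K1)
    lam_bar = pi0 * l0 + pi1 * l1
    tau0, tau1 = pi0 * l0 / lam_bar, pi1 * l1 / lam_bar
    c = l0 * tau1 + l1 * tau0 + K0 + K1
    a = l0 + l1 + K0 + K1
    rad = math.sqrt(((l0 - l1) + (K0 - K1)) ** 2 + 4 * K0 * K1)
    r0, r1 = (a - rad) / 2, (a + rad) / 2
    A = (c - r0) / (r1 - r0)
    return A / r0 + (1 - A) / r1, 1 / lam_bar
```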
It can easily be shown that if λ_0 = λ_1 (= λ, say), then f(t) = λe^{−λt}. Furthermore, if K_0 → ∞, K_1 → ∞ and π_0 is constant, then f(t) → λ̄ e^{−λ̄ t}.
Let us denote by f(t, λ_i) dt the probability that the length of the inter-arrival time lies in the interval (t, t + dt) and that at its termination the arrival rate is λ_i. Making the same kind of argument as in the derivation of f(t), we obtain by considering (4.2.5)

(4.2.12)  f(t, λ_0) = [(τ_0(λ_1 − r_0) + K_1)/(r_1 − r_0)] r_0 e^{−r_0 t} + [(τ_0(λ_1 − r_1) + K_1)/(r_0 − r_1)] r_1 e^{−r_1 t},

and similarly for f(t, λ_1).
Let t_1 and t_2 be the lengths of two successive inter-arrival times, and denote by h(t_1, t_2) their joint p.d.f. Also let h(t_1, λ_i, t_2) dt_1 dt_2 denote the probability that the length of the first inter-arrival time lies in the interval (t_1, t_1 + dt_1), that at its termination the arrival rate is λ_i (which is the same as saying that the next inter-arrival time starts with an arrival rate λ_i), and that the length of the next inter-arrival time lies in (t_2, t_2 + dt_2). We then have

(4.2.13)  h(t_1, t_2) = h(t_1, λ_0, t_2) + h(t_1, λ_1, t_2)
                      = h(t_2/t_1, λ_0) f(t_1, λ_0) + h(t_2/t_1, λ_1) f(t_1, λ_1),

where
By virtue of the process being a Markov process, it is clear that whatever be t_1, the length of the next inter-arrival time depends only on the arrival rate at its start; that is, h(t_2/t_1, λ_i) depends only on λ_i. We can therefore write

(4.2.14)  h(t_1, t_2) = h(t_2/λ_0) f(t_1, λ_0) + h(t_2/λ_1) f(t_1, λ_1).

On the other hand the conditional distribution of t_2 given t_1 does depend on both t_1 and the arrival rate at its start. We can write (4.2.14) as

h(t_2/t_1) f(t_1) = [h(t_2/λ_0) f(λ_0/t_1) + h(t_2/λ_1) f(λ_1/t_1)] f(t_1),

that is,

(4.2.15)  h(t_2/t_1) = h(t_2/λ_0) f(λ_0/t_1) + h(t_2/λ_1) f(λ_1/t_1).

The only difference in the derivation of f(t_2) and h(t_2/λ_0) lies in the initial conditions. h(t_2/λ_0) can be obtained from (4.2.10) by merely replacing t, τ_0 and τ_1 by t_2, 1 and 0 respectively, and recalling that in (4.2.10) c = λ_0τ_1 + λ_1τ_0 + K_0 + K_1; that is,

(4.2.16)  h(t_2/λ_0) = [(c_0 − r_0)/(r_1 − r_0)] r_0 e^{−r_0 t_2} + [1 − (c_0 − r_0)/(r_1 − r_0)] r_1 e^{−r_1 t_2},

where c_0 = λ_1 + K_0 + K_1. Similarly

(4.2.17)  h(t_2/λ_1) = [(c_1 − r_0)/(r_1 − r_0)] r_0 e^{−r_0 t_2} + [1 − (c_1 − r_0)/(r_1 − r_0)] r_1 e^{−r_1 t_2},

where c_1 = λ_0 + K_0 + K_1. Put
(4.2.18)  a_0 = (c_0 − r_0)/(r_1 − r_0),   b_0 = 1 − a_0,
          a_1 = (c_1 − r_0)/(r_1 − r_0),   b_1 = 1 − a_1,
          α_0 = [τ_0(λ_1 − r_0) + K_1]/(r_1 − r_0),
          β_0 = [τ_1(λ_0 − r_0) + K_0]/(r_1 − r_0),

where τ_0 and τ_1 are given by (4.2.2) and c_0 and c_1 are as given in (4.2.16) and (4.2.17) respectively.
Making use of (4.2.16), (4.2.17), (4.2.12) and (4.2.14) we obtain for the joint distribution of t_1 and t_2

Integrating h(t_1, t_2) with respect to t_1 and t_2 separately we obtain the marginal distributions of t_2 and t_1 and, as expected, these are the same and given by (4.2.10). Thus we have shown that t_1 and t_2 are
identically distributed but not independent. Now

(4.2.20)  Cov(t_1, t_2) = E(t_1 t_2) − E(t_1) E(t_2)
                        = (a_0α_0 + b_0β_0)/r_0² + (a_1α_1 + b_1β_1)/r_1²
                          + (a_0α_1 + b_0β_1 + a_1α_0 + b_1β_0)/(r_0 r_1) − 1/λ̄²,

where λ̄ is as given in (4.2.11). It can easily be shown that Cov(t_1, t_2) → 0 when λ_0 = λ_1, and when K_0 → ∞, K_1 → ∞ with π_0 constant.
4.3 The distribution problems concerning the number of arrivals:

(A) The generating function of the distribution of the number of arrivals in a time interval of length t. Let p_t(n, λ_i) (i = 0, 1) denote the probability that there are exactly n arrivals in an interval of time (0, t) and that the arrival rate at time t is λ_i, where again by time t we mean an interval of time of length t no matter where it is placed on the time axis. By employing the 'δt' technique, we have the following two sets of differential equations:

(4.3.1)   p_t'(0, λ_0) + (λ_0 + K_0) p_t(0, λ_0) − K_1 p_t(0, λ_1) = 0
          p_t'(n, λ_0) + (λ_0 + K_0) p_t(n, λ_0) − K_1 p_t(n, λ_1) − λ_0 p_t(n−1, λ_0) = 0    (n ≥ 1)
(n ≥ 1)

where (4.3.1) and (4.3.2) are to be solved subject to the initial conditions

(4.3.3)   p_0(n, λ_i) = 0 (n ≥ 1),   p_0(0, λ_i) = π_i    (i = 0, 1),

π_i being given by (4.1.1). Define

(4.3.4)   G_t(z, λ_i) = Σ_{n=0}^∞ z^n p_t(n, λ_i);
then (4.3.1) and (4.3.2) lead to

(4.3.5)   ∂G_t(z, λ_0)/∂t + (λ_0 − λ_0 z + K_0) G_t(z, λ_0) − K_1 G_t(z, λ_1) = 0
          ∂G_t(z, λ_1)/∂t + (λ_1 − λ_1 z + K_1) G_t(z, λ_1) − K_0 G_t(z, λ_0) = 0,

and (4.3.3) leads to

(4.3.6)   G_0(z, λ_i) = π_i.

Denoting the Laplace transform of G_t(z, λ_i) with respect to t by G_s*(z, λ_i), Re(s) > 0, we have from (4.3.5) and (4.3.6)
(4.3.7)   (s + K_0 + λ_0 − λ_0 z) G_s*(z, λ_0) − K_1 G_s*(z, λ_1) = π_0
          −K_0 G_s*(z, λ_0) + (s + K_1 + λ_1 − λ_1 z) G_s*(z, λ_1) = π_1,

from which we obtain (4.3.8).

But the probability p_t(n) of having exactly n arrivals in an interval of time (0, t) is clearly given by

p_t(n) = p_t(n, λ_0) + p_t(n, λ_1).

Hence if we denote by G_t(z) the generating function of the p_t(n)'s and by G_s*(z) the Laplace transform with respect to t of G_t(z), we then have

G_s*(z) = G_s*(z, λ_0) + G_s*(z, λ_1),

which on using (4.3.8) gives

(4.3.9)   G_s*(z) = [s + K_0 + K_1 + (π_0λ_1 + π_1λ_0)(1 − z)]
                    / [s² + {K_0 + K_1 + (λ_0 + λ_1)(1−z)}s + (1−z){K_0λ_1 + K_1λ_0 + λ_0λ_1(1−z)}]
                  = [s + c(z)] / [(s + r_0(z))(s + r_1(z))]
where

(4.3.10)  c(z) = K_0 + K_1 + (π_0λ_1 + π_1λ_0)(1 − z),
          r_i(z) = ½[{(K_0 + K_1) + (λ_0 + λ_1)(1 − z)} ± √({(λ_0 − λ_1)(1 − z) + (K_0 − K_1)}² + 4K_0K_1)]    (i = 0, 1),

with r_0(z) having the negative sign in front of the radical. On further simplification the expression for G_s*(z) given by (4.3.9) can be written as

(4.3.11)  G_s*(z) = A_0(z)/(s + r_0(z)) + A_1(z)/(s + r_1(z)),
where

(4.3.12)  A_0(z) = [c(z) − r_0(z)] / [r_1(z) − r_0(z)]  and  A_0(z) + A_1(z) = 1.

Taking the inverse transform of (4.3.11) we obtain the expression for the required generating function

(4.3.13)  G_t(z) = A_0(z) e^{−r_0(z) t} + A_1(z) e^{−r_1(z) t}.
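G_t(z) of (4.3.13) can be checked against a direct numerical integration of the differential equations (4.3.5); both routines below are illustrative sketches under the reading of (4.3.9)-(4.3.13) given here, with invented function names and parameters.

```python
import math

def Gt_closed(z, t, l0, l1, K0, K1):
    """G_t(z) as the two-term mixture A_0(z) e^{-r_0 t} + A_1(z) e^{-r_1 t}
    of (4.3.13), with c(z), r_i(z) from (4.3.10) and A_0(z) from (4.3.12)."""
    pi0, pi1 = K1 / (K0 + K1), K0 / (K0 + K1)
    c = K0 + K1 + (pi0 * l1 + pi1 * l0) * (1 - z)
    a = K0 + K1 + (l0 + l1) * (1 - z)
    prod = (1 - z) * (K0 * l1 + K1 * l0 + l0 * l1 * (1 - z))
    rad = math.sqrt(a * a - 4 * prod)
    r0, r1 = (a - rad) / 2, (a + rad) / 2
    A0 = (c - r0) / (r1 - r0)
    return A0 * math.exp(-r0 * t) + (1 - A0) * math.exp(-r1 * t)

def Gt_ode(z, t, l0, l1, K0, K1, steps=4000):
    """Integrate the pair of linear ODEs (4.3.5) with initial condition
    (4.3.6) by classical RK4 and return G_t(z, l0) + G_t(z, l1)."""
    g0, g1 = K1 / (K0 + K1), K0 / (K0 + K1)      # pi_0, pi_1 at t = 0
    h = t / steps
    f = lambda u, v: (-(l0 - l0 * z + K0) * u + K1 * v,
                      -(l1 - l1 * z + K1) * v + K0 * u)
    for _ in range(steps):
        k1a, k1b = f(g0, g1)
        k2a, k2b = f(g0 + h * k1a / 2, g1 + h * k1b / 2)
        k3a, k3b = f(g0 + h * k2a / 2, g1 + h * k2b / 2)
        k4a, k4b = f(g0 + h * k3a, g1 + h * k3b)
        g0 += h * (k1a + 2 * k2a + 2 * k3a + k4a) / 6
        g1 += h * (k1b + 2 * k2b + 2 * k3b + k4b) / 6
    return g0 + g1
```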
The mean and the variance of the number of arrivals in (0, t) are respectively given by
Under the condition that K_0 = K_1 (= K, say), we have
It has been verified that

(i) λ_0 = λ_1 (= λ, say) implies G_t(z) = e^{−λ(1−z)t}, and

(ii) K_0 → ∞, K_1 → ∞ with π_0 constant implies G_t(z) → e^{−λ̄(1−z)t}.

Similarly, by separately considering G_s*(z, λ_0) and G_s*(z, λ_1) as given by (4.3.8), we obtain
"
'
rG*s (Z,A) = R (z) s +1r o (z)
0
0
~
(4.3.16)
l
e
G: (z,A ) = Qo(z) s
1
.
1
+, r·0 (z)
1
+ R (z)
s +r1 (z)
1
1
+ Q1(z) s +r (.z)
1,
where
Taking the inverse transforms of (4.3.16) we obtain
,
f Gt(Z,A
O)
= Ro(z)e
(4.3.18) \
-r (z)t
0
-r (z)t
l Gt(z,Al ) = Qo(z)e
0
-rl(z)t
+ Rl(z)e
.
-r1(z)t
+ Ql(z)e
106
(B) The generating function of the joint distribution of the number of arrivals in two successive non-overlapping intervals of lengths t_1 and t_2: Let h(n_1, n_2) denote the joint probability of finding n_1 arrivals in an interval of time of length t_1 and n_2 arrivals in the next successive interval of length t_2. Also let h(n_1, λ_i, n_2) denote the probability that there are n_1 arrivals in t_1, that at the end of t_1 the arrival rate is λ_i, and that there are n_2 arrivals in the next successive interval of length t_2. It is to be understood that both h(n_1, n_2) and h(n_1, λ_i, n_2) do depend explicitly on t_1 and t_2, but we have suppressed them for convenience of notation. Clearly,

(4.3.20)  h(n_1, n_2) = h(n_2/n_1, λ_0) p(n_1, λ_0) + h(n_2/n_1, λ_1) p(n_1, λ_1),

where p(n_1, λ_i) is the probability of finding n_1 arrivals in t_1 and that at its termination the arrival rate is λ_i.

Making the same kind of argument as in Section 4.2, the conditional distribution of n_2 given both n_1 and λ_i depends only on λ_i. That is, we can write
Define

(4.3.21)  w(z_2/λ_i) = Σ_{n_2=0}^∞ z_2^{n_2} h(n_2/λ_i)    (i = 0, 1).

Making use of (4.3.21) it is easy to see that (4.3.20) leads to

The expressions for G(z_1, λ_i) (i = 0, 1) can be obtained from (4.3.18) by merely replacing t by t_1 and z by z_1. Also the derivation of the formula for w(z_2/λ_i) is exactly the same as that of G_t(z) (see (4.3.13)) except for the initial conditions. More specifically, w(z_2/λ_0) is obtained from (4.3.13) by merely replacing t, π_0 and π_1 by t_2, 1 and 0 respectively. Similarly for w(z_2/λ_1) we replace t, π_0 and π_1 of (4.3.13) by t_2, 0 and 1 respectively; that is,
(4.3.23)  w(z_2/λ_0) = M_0(z_2) e^{−r_0(z_2)t_2} + M_1(z_2) e^{−r_1(z_2)t_2}
          w(z_2/λ_1) = N_0(z_2) e^{−r_0(z_2)t_2} + N_1(z_2) e^{−r_1(z_2)t_2},

where M_0(z_2) = [c_0(z_2) − r_0(z_2)] / [r_1(z_2) − r_0(z_2)] and M_0(z_2) + M_1(z_2) = 1.

Now we are in a position to write down the final expression for the joint generating function of n_1 and n_2, and this is given by

          [R_0(z_1)M_0(z_2) + Q_0(z_1)N_0(z_2)] e^{−r_0(z_1)t_1} e^{−r_0(z_2)t_2}
        + [R_0(z_1)M_1(z_2) + Q_0(z_1)N_1(z_2)] e^{−r_0(z_1)t_1} e^{−r_1(z_2)t_2}
        + [R_1(z_1)M_0(z_2) + Q_1(z_1)N_0(z_2)] e^{−r_1(z_1)t_1} e^{−r_0(z_2)t_2}
        + [R_1(z_1)M_1(z_2) + Q_1(z_1)N_1(z_2)] e^{−r_1(z_1)t_1} e^{−r_1(z_2)t_2},

where R_0(z), R_1(z), Q_0(z) and Q_1(z) are given by (4.3.17) and M_0(z), M_1(z), N_0(z) and N_1(z) by (4.3.23). Also r_0(z) and r_1(z) are given by (4.3.10).
On putting z_1 = 1 and z_2 = 1 separately we obtain the generating functions of n_1 and n_2 respectively, and each is found to coincide with (4.3.13).

As a check we calculated E(n) and Var(n) and obtained the same expressions as given by (4.3.14). Also the expression for the covariance between n_1 and n_2 is given by

Clearly (i) λ_0 = λ_1 implies Cov(n_1, n_2) = 0, and
4.4 The server's idle fraction and discussion of the queue equations. Let P(n/λ_i) (i = 0, 1) denote the stationary probability that there are n units in the system and that the arrival rate is λ_i. It is easy to see that the system satisfies the following two sets of equations:

(4.4.1)   μP(1, λ_0) − (λ_0 + K_0) P(0, λ_0) + K_1 P(0, λ_1) = 0
          μP(n+1, λ_0) − (λ_0 + K_0 + μ) P(n, λ_0) + K_1 P(n, λ_1) + λ_0 P(n−1, λ_0) = 0    (n ≥ 1)

(4.4.2)   μP(1, λ_1) − (λ_1 + K_1) P(0, λ_1) + K_0 P(0, λ_0) = 0
          μP(n+1, λ_1) − (λ_1 + K_1 + μ) P(n, λ_1) + K_0 P(n, λ_0) + λ_1 P(n−1, λ_1) = 0    (n ≥ 1).
Define

(4.4.3)   h(z, λ_i) = Σ_{n=0}^∞ z^n P(n, λ_i)    (i = 0, 1).

Making use of (4.4.3), equations (4.4.1) and (4.4.2) lead to the following two equations

(4.4.4)

from which we obtain

(4.4.5)   h(z, λ_0) = Δ_0(z)/Δ(z),   h(z, λ_1) = Δ_1(z)/Δ(z),

where

Δ_0(z) = det | μ(1−z)P(0, λ_0)            K_1 z                       |
             | μ(1−z)P(0, λ_1)            λ_1z² − (μ+λ_1+K_1)z + μ    |

Δ_1(z) = det | λ_0z² − (μ+λ_0+K_0)z + μ   μ(1−z)P(0, λ_0) |
             | K_0 z                      μ(1−z)P(0, λ_1) |

and

Δ(z) = det | λ_0z² − (μ+λ_0+K_0)z + μ    K_1 z |
           | K_0 z                       λ_1z² − (μ+λ_1+K_1)z + μ |.

Clearly lim_{z→1} h(z, λ_i) = π_i, where π_i is given by (4.1.1).
From (4.4.5) we have

π_0 = lim_{z→1} Δ_0(z)/Δ(z)    (form 0/0)
    = K_1 P(0)/[K_1(1−ρ_0) + K_0(1−ρ_1)] = π_0 P(0)/[π_0(1−ρ_0) + π_1(1−ρ_1)],

where ρ_i = λ_i/μ (i = 0, 1) and P(0) = P(0, λ_0) + P(0, λ_1) represents the fraction of time in which the server is idle; that is,

(4.4.6)   P(0) = π_0(1−ρ_0) + π_1(1−ρ_1) = 1 − ρ,

where ρ = λ̄/μ, and we also assume ρ < 1, for otherwise there would be no steady state conditions. The result (4.4.6) is, of course, obvious from the fact that the expected number of arrivals per unit time is λ̄ (see (4.3.14)) and the capacity of the number of services per unit time is μ.

Unfortunately, when we perform the same operation on h(z, λ_1) we arrive at the same result, which, of course, is obvious from (4.4.4) when we put z = 1. Thus we are unable to determine P(0, λ_0) and P(0, λ_1) separately. Furthermore, at present, the solution seems intractable.
If we are interested only in the mean number of units in the system we can consider a modified process where there is a limited waiting room, of size N, say. This means we shall have a set of 2N+2 homogeneous linear equations in 2N+2 unknowns, and the coefficient matrix is of rank 2N+1. For small values of N it is easy to get explicit solutions for the unknowns, and for large values of N we can at least write the solution in determinantal form. Knowing the solution of the P(n, λ_i) for n = 0, 1, ..., N, i = 0, 1, we can find the mean number of units in the system. Then, letting N → ∞, the corresponding result will give the mean number of units in the system for our original process. This is as far as we can go concerning this problem at present. It may be noted that we are faced with the same difficulty in the problem where the arrival rate λ is constant but the service rate is a random variable.
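The limited-waiting-room device described above is easy to carry out numerically: set up the 2N+2 balance equations corresponding to (4.4.1)-(4.4.2) with arrivals blocked at n = N, replace one equation by the normalizing condition, and solve. The sketch below is illustrative; with equal arrival rates it must reproduce the ordinary M/M/1 mean ρ/(1−ρ).

```python
def mean_units_truncated(l0, l1, K0, K1, mu, N):
    """Mean number of units in the system with waiting room of size N:
    states (n, i), n = 0..N, i = 0, 1; Gaussian elimination with partial
    pivoting on the 2N+2 linear equations."""
    m = 2 * (N + 1)
    idx = lambda n, i: 2 * n + i
    A = [[0.0] * m for _ in range(m)]
    lam, K = (l0, l1), (K0, K1)
    for n in range(N + 1):
        for i in (0, 1):
            row = A[idx(n, i)]
            out = K[i] + (lam[i] if n < N else 0.0) + (mu if n > 0 else 0.0)
            row[idx(n, i)] -= out                   # total outflow
            row[idx(n, 1 - i)] += K[1 - i]          # rate switch into (n, i)
            if n > 0:
                row[idx(n - 1, i)] += lam[i]        # arrival into (n, i)
            if n < N:
                row[idx(n + 1, i)] += mu            # service into (n, i)
    A[0] = [1.0] * m                                # normalizing equation
    b = [0.0] * m
    b[0] = 1.0
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c2 in range(col, m):
                A[r][c2] -= f * A[col][c2]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c2] * x[c2] for c2 in range(r + 1, m))) / A[r][r]
    return sum(n * (x[idx(n, 0)] + x[idx(n, 1)]) for n in range(N + 1))
```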
BIBLIOGRAPHY

[1] Bailey, Norman T. J., "A Continuous Time Treatment of a Simple Queue Using Generating Functions," Journal of the Royal Statistical Society, Series B, 16 (1954), 288-291.

[2] Cox, D. R., and Smith, W. L., Queues, London, Methuen and Co., 1961.

[3] Feller, William, An Introduction to Probability Theory and Its Applications (2nd ed.), New York, John Wiley and Sons, 1958.

[4] Little, John D. C., "A Proof for the Queueing Formula: L = λW," Operations Research, 9 (1961), 383-387.

[5] Morse, Philip M., Queues, Inventories and Maintenance, New York, John Wiley and Sons, 1958.

[6] Saaty, Thomas L., Elements of Queueing Theory, New York, McGraw-Hill Book Co., 1961.

[7] Yadin, M., and Naor, P., "Queueing Systems with a Removable Service Station," Institute of Statistics Mimeograph Series No. 353, University of North Carolina, Chapel Hill, 1963.
APPENDIX
A NOTE ON COST STRUCTURE AND OPTIMIZATION

The cost function which we construct and consider here is of the
simplest possible type: the contributions of the various components
to the total cost are assumed to be linear with respect to their
average values.

If the presence of a single waiting customer at the station costs
Cc per unit time, the contribution to the total cost due to the
average number of units waiting for service (Ls) amounts to CcLs.
First, we shall consider

(a) the queueing process considered in Chapter I, that is, the
case where the arrival rate is controllable and assumes two values
λ0 and λ1.

In this process, if we assume that the management has to pay a
penalty cost of Ca per customer turned away, then the cost per unit
time associated with this source is Ca P(λ1)(λ0 - λ1), where P(λ1)
is the fraction of time in which the arrival rate is λ1.
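As a toy numerical illustration of these two cost components (all
numbers below are made-up assumptions, not values from the thesis):

```python
# Illustrative (assumed) parameter values for the two cost components.
Cc = 2.0         # holding cost per waiting customer per unit time
Ls = 3.5         # average number of units waiting for service
Ca = 10.0        # penalty cost per customer turned away
P_lambda1 = 0.4  # fraction of time the arrival rate is lambda1
lam0, lam1 = 5.0, 3.0  # the two arrival rates (lambda0 > lambda1)

holding_cost = Cc * Ls                         # contribution of waiting units
penalty_cost = Ca * P_lambda1 * (lam0 - lam1)  # contribution of turned-away customers
total_cost = holding_cost + penalty_cost
# holding_cost = 7.0, penalty_cost = 8.0, total_cost = 15.0
```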
Clearly the above two costs compete with each other, and the problem
is how to choose r, R and λ1 (or δ = λ0 - λ1) so as to minimize the
total cost.  This total cost is given by
C(r, R, δ) = CcLs + Ca P(λ1)δ

where

Ls = [p0^(R+r) m^(-2)(m+δ)^2 + (p0^(R+r)/(p0^r - p0^R)){(R - p0 r)m^(-1)(m+δ)^2
       + (1/2)(R+r)(R-r-1)δ(m+δ) - (R - p1 r)m}]
     / [m^(-1)(m+δ)^2 - (R-r)(p0^(R+r)/(p0^r - p0^R))δ(m+δ)],   m = 1 - p0,

and P(λ1) is the fraction of time in which the arrival rate is λ1,
as obtained in Chapter I.
As an approximation for the choice of the optimal values of r, R
and δ, it is suggested that the cost C(r, R, δ) be differentiated
with respect to r, R and δ separately, and that in each case the
resulting derivative be set equal to zero.  The three resulting
equations are simultaneous transcendental equations in r, R and δ;
they involve the powers p0^r, p0^R, p0^(R+r) and p0^(2R+r) together
with the factor log p0, and they do not admit an explicit solution.
By means of a computer it may be possible to obtain an approximate optimal solution.
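Such a computer search can be sketched as a simple grid search over
the decision variables.  In the sketch below the body of cost() is a
hypothetical placeholder, not the formula of this appendix; in
practice the expressions for Ls and P(λ1) given above would be
substituted, and the parameter ranges are illustrative assumptions.

```python
import itertools

def cost(r, R, delta, Cc=1.0, Ca=5.0, p0=0.8):
    # Hypothetical stand-in for C(r, R, delta) = Cc*Ls + Ca*P(lambda1)*delta.
    # Ls and P_lam1 below are placeholders, NOT the thesis formulas.
    Ls = p0 / (1.0 - p0) * (R - r) / (1.0 + delta)
    P_lam1 = r / float(R + r)
    return Cc * Ls + Ca * P_lam1 * delta

def grid_search(r_max=10, R_max=20, deltas=(0.1, 0.5, 1.0, 2.0)):
    # Exhaustive search over integer thresholds r < R and a few delta values.
    best = None
    for r, R in itertools.product(range(1, r_max + 1), range(2, R_max + 1)):
        if R <= r:      # the thresholds must satisfy r < R
            continue
        for d in deltas:
            c = cost(r, R, d)
            if best is None or c < best[0]:
                best = (c, r, R, d)
    return best  # (cost, r, R, delta)
```

A finer grid, or a continuous search in δ for each fixed pair (r, R),
refines the approximation at the price of more computation.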
Next we shall consider

(b) the special case of the queueing process considered in
Chapter III.  This is the case where the service rate is changeable
and assumes two values μ0 and μ1.

In this process we assume that the additional cost per unit time
for bringing in an auxiliary server with capacity ε = μ1 - μ0 is
Cbε.  The contribution to the total cost from this source is
Cb P(μ1)ε, where P(μ1) is the fraction of time in which the service
rate is μ1.  The total cost is, therefore, given by
C(r, R, ε) = CcLs + Cb P(μ1)ε

where Ls and P(μ1) are rational functions of b0 = λ/μ0 and
b1 = λ/(μ0 + ε), involving the powers b0^r, b0^R and b1^(R+r);
they are obtained from the results of Chapter III.

The problem of finding optimum values of r, R and ε is similar to
the first one, and we shall not pursue it further.