TRANSITION PROBABILITIES BASED ON KOLMOGOROV EQUATIONS FOR PURE BIRTH PROCESSES

By Mutothya Nicholas Mwilu
Reg. No: I56/70030/2011

Dissertation Submitted to the School of Mathematics of the University of Nairobi in Partial Fulfillment of the Requirements for the Degree of Master of Science in the School of Mathematics

July 2013

DECLARATION

I declare that this is my original work and has not been presented for an award of a degree in any other university.

Signature ………………………… Date ………………..
Mutothya Nicholas Mwilu

This thesis is submitted for examination with my approval as the university supervisor.

Signature ………………………………… Date ……………………..
Prof. J.A.M. Ottieno

EXECUTIVE SUMMARY

The aim of this thesis is to identify the probability distributions that emerge when solving the Kolmogorov forward differential equation for a continuous time non-homogeneous pure birth process. This equation is

$$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + \lambda_{k+n}(t)\, p_{k,k+n}(s,t) = \lambda_{k+n-1}(t)\, p_{k,k+n-1}(s,t).$$

This equation has been solved for four different processes: the Poisson process ($\lambda_n(t) = \lambda$), the simple birth process ($\lambda_n(t) = n\lambda$), the simple birth process with immigration ($\lambda_n(t) = n\lambda + v$), and the Polya process ($\lambda_n(t) = \lambda \frac{1 + an}{1 + \lambda a t}$).

Three different methods have been applied in solving the forward Kolmogorov equation above, with the initial conditions $p_{kk}(s,s) = 1$ and $p_{k,k+n}(s,s) = 0$ for $n > 0$. These methods are: (1) the integrating factor technique, (2) Lagrange's method, and (3) the generator matrix technique. The results from the three methods were similar. In addition, the first passage time arising from the solution of the basic difference differential equations has been derived for each of the four processes.

From the Poisson process, we found the distribution of the increments to be a Poisson distribution with parameter $\lambda(t-s)$:

$$p_{k,k+n}(s,t) = e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^n}{n!}; \qquad n = 0, 1, 2, \ldots$$

This is independent of the initial state and depends only on the length of the time interval; thus for a Poisson process the increments are independent and stationary. The first passage distribution was

$$f_n(t) = \frac{\lambda^n\, t^{\,n-1}\, e^{-\lambda t}}{(n-1)!} \quad \text{for } t > 0;\ n = 1, 2, 3, \ldots$$

which is gamma$(n, \lambda)$.

From the simple birth process, we found the distribution of the increments to be a negative binomial distribution with $p = e^{-\lambda(t-s)}$ and $q = 1 - e^{-\lambda(t-s)}$:

$$p_{k,k+n}(s,t) = \binom{k+n-1}{n}\, e^{-\lambda k(t-s)} \left(1 - e^{-\lambda(t-s)}\right)^n.$$

It depends on the length of the time interval $t-s$ and on $k$; the increments are thus stationary but not independent. The first passage distribution is

$$f_n(t) = \lambda (n-1)\, e^{-\lambda t} \left(1 - e^{-\lambda t}\right)^{n-2}; \qquad t > 0,\ n = 2, 3, \ldots$$

which is an exponentiated exponential distribution with parameters $\lambda$ and $n-1$.

From the simple birth process with immigration, we found the distribution of the increments to be a negative binomial distribution with $p = e^{-\lambda(t-s)}$ and $q = 1 - e^{-\lambda(t-s)}$:

$$p_{k,k+n}(s,t) = \binom{k + \frac{v}{\lambda} + n - 1}{n}\, e^{-\left(k + \frac{v}{\lambda}\right)\lambda(t-s)} \left(1 - e^{-\lambda(t-s)}\right)^n.$$

It depends on the length of the time interval $t-s$ and on $k$; the increments are thus stationary but not independent. The first passage distribution, starting from the initial state $n_0$, is

$$f_n(t) = \frac{(n_0\lambda + v)\,(n_0\lambda + \lambda + v)\cdots\big((n-1)\lambda + v\big)}{(n - n_0 - 1)!\;\lambda^{\,n-n_0-1}}\; e^{-(n_0\lambda + v)t} \left(1 - e^{-\lambda t}\right)^{n-n_0-1}; \qquad t > 0.$$

From the Polya process, we found the distribution of the increments to be a negative binomial distribution with $p = \frac{1 + \lambda a s}{1 + \lambda a t}$ and $q = \frac{\lambda a (t-s)}{1 + \lambda a t}$:

$$p_{k,k+j}(s,t) = \binom{j + k + \frac{1}{a} - 1}{j} \left(\frac{1 + \lambda a s}{1 + \lambda a t}\right)^{k + \frac{1}{a}} \left(\frac{\lambda a (t-s)}{1 + \lambda a t}\right)^{j}.$$

It depends on $s$ and $t$ separately, not only on the length of the time interval $t-s$, and on $k$; the increments are thus neither stationary nor independent. The first passage distribution is

$$f_n(t) = \frac{\Gamma\!\left(n + \frac{1}{a}\right)}{\Gamma(n)\,\Gamma\!\left(\frac{1}{a}\right)} \left(\frac{1}{\lambda a}\right)^{\frac{1}{a}} \frac{t^{\,n-1}}{\left(t + \frac{1}{\lambda a}\right)^{n + \frac{1}{a}}}; \qquad t > 0,\ n > 0,\ \frac{1}{a} > 0,\ \frac{1}{\lambda a} > 0,$$

which is a generalized Pareto distribution.

ACKNOWLEDGEMENT

I am grateful to Professor J.A.M. Ottieno for serving as my adviser during this research. He provided a never ending source of ideas and directions.
I have received much support from friends and colleagues through the years, which has enabled me to reach this point. Special thanks must go to Mr. Karanja, with whom I have worked for the past one year. His contribution and encouragement are highly appreciated. Finally, I must acknowledge the love and understanding extended to me by my family, particularly my wife Christine and my daughters Sasha and Melisa; they were always there to reassure me.

TABLE OF CONTENTS

Declaration .......................................................................................................... ii
Executive Summary ..........................................................................................iii
CHAPTER ONE ............................................................................................. 1
GENERAL INTRODUCTION ...................................................................... 1
1.1 Background Information ............................................................................... 1
1.2 Stochastic Processes ...................................................................................... 2
1.3 Pure Birth Processes ...................................................................................... 3
1.4 Literature Review .......................................................................................... 3
1.5 Problem Statement And Objectives Of The Study ....................................... 5
1.6 Areas Of Application..................................................................................... 6
CHAPTER TWO ............................................................................................ 7
TRANSITION PROBABILITIES FOR A GENERAL PURE BIRTH PROCESS ....................................................................................................... 7
2.1 Introduction ...................................................................................
7
2.2 Chapman-Kolmogorov Equations ................................................................. 8
2.2.1 Time Homogeneous Markov Process ...............................................................9
2.2.2 Time Non-Homogeneous Markov Process.....................................................10
2.3 Forward And Backward Kolmogorov Differential Equations .................... 11
2.3.1 Forward Kolmogorov Differential Equation ..................................................11
2.3.2 Backward Kolmogorov Differential Equations ..............................................12
2.4 Methods Of Solving Continuous Time Non-Homogeneous Forward Kolmogorov Differential Equations .................................................................. 13
2.4.1 Integrating Factor Technique ..........................................................................13
2.4.2 Lagrange's Method .........................................................................................14
2.4.3 Laplace Transforms ........................................................................................15
2.4.4 Generator Matrix Method ...............................................................................17
CHAPTER THREE ...................................................................................... 18
TRANSITIONAL PROBABILITIES BASED ON THE INTEGRATING FACTOR TECHNIQUE .............................................................................. 18
3.1 Introduction ................................................................................................. 18
3.2 Determining $p_{kk}(s,t)$ ................................................................................... 18
3.3 Special Cases Of Non-Homogeneous Markov Processes ...........................
19
3.3.1 Poisson Process ..............................................................................................19
3.3.2 Simple Birth Process.......................................................................................23
3.3.3 Simple Birth Process With Immigration ........................................................30
3.3.4 Polya Process .................................................................................................36
CHAPTER FOUR ........................................................................................ 44
PROBABILITY DISTRIBUTIONS FOR TRANSITIONAL PROBABILITIES USING LAGRANGE'S METHOD .............................. 44
4.1 Introduction ................................................................................................. 44
4.2 General Definitions ..................................................................................... 44
4.3 Poisson Process ........................................................................................... 45
4.4 Simple Birth................................................................................................. 47
4.5 Simple Birth With Immigration ................................................................. 51
4.6 Polya Process .............................................................................................. 56
CHAPTER FIVE .......................................................................................... 61
INFINITESIMAL GENERATOR ............................................................... 61
5.1 Introduction ................................................................................................. 61
5.2 Solving Matrix Equation ............................................................................. 62
5.3 Simple Birth Process ..................................................................................
72
5.4 Simple Birth With Immigration .................................................................. 74
5.5 Polya Process ............................................................................................... 77
CHAPTER SIX ............................................................................................ 80
FIRST PASSAGE DISTRIBUTIONS FOR PURE BIRTH PROCESSES 80
6.1 Derivation Of $f_n(t)$ From $p_n(t)$ ................................................................... 80
6.2 Poisson Process .......................................................................................... 81
6.2.1 Determining $f_n(t)$ For The Poisson Process.................................................81
6.2.2 Mean And Variance Of $f_n(t)$ For The Poisson Process ................................82
6.3 Simple Birth Process ................................................................................... 84
6.3.1 Determining $f_n(t)$ For The Simple Birth Process ..........................................84
6.3.2 Mean And Variance Of $f_n(t)$ For The Simple Birth Process .........................86
6.4 Pure Birth Process With Immigration ........................................................ 92
6.4.1 Determining $f_n(t)$ For Pure Birth Process With Immigration .....................92
6.4.2 Mean And Variance ........................................................................................95
6.5 Polya Process .............................................................................................. 96
6.5.1 Determining $f_n(t)$ For The Polya Process ....................................................96
6.5.2 Mean And Variance Of $f_n(t)$ For The Polya Process .................................99
CHAPTER SEVEN .................................................................................... 101
SUMMARY AND CONCLUSION ...........................................................
101
7.1 Summary.................................................................................................... 101
7.1.1 General Kolmogorov Forward And Backward Equations ..........................101
7.1.2 Poisson Process .............................................................................................101
7.1.3 Simple Birth Process.....................................................................................102
7.1.4 Simple Birth With Immigration ....................................................................102
7.1.5 Polya Process ................................................................................................102
7.2 Conclusion ................................................................................................. 103
7.3 Recommendation For Further Research.................................................... 103
7.4 References ................................................................................................. 104

CHAPTER ONE
GENERAL INTRODUCTION

1.1 Background Information

Consider a case of infectious disease transmission. When an infected person makes contact with a susceptible individual, there is potential transmission. A susceptible person makes the potential transition to being an infected person through contact with infected persons. A newly infected person then makes further transitions to other infected states, with the possibility of removal. The removal stage represents death caused by the disease or eventual recovery. Those individuals who have recovered re-enter the susceptible stage. The average number of people that an infected person infects during his or her infectious period is the reproductive number, and the mean number of affected individuals is the outbreak size. These transitions, for both susceptible and infected persons, evolve over time. Such a complex process can be described mathematically through what are called continuous time stochastic processes.
Banks loan money to customers on trust that they will pay it back. The financial circumstances of a customer servicing a loan may change within the loan period. If the financial status improves or remains stable, the customer repays the loan without default and the loan status is said to be always current. When a customer makes a full and final settlement before the end of the loan period, attrition is said to have occurred. If the customer is financially constrained, he defaults on the loan. A customer who has defaulted makes further transitions to other higher default states (early delinquency, mild delinquency and hard delinquency), with the possibility of paying the loan so as to fall back to current or a lower delinquency status. When the customer defaults for a very long period, the loan is considered irrecoverable; the bank writes it off as bad debt and the customer is moved from the loan book to the written-off book. In addition, some customers die before clearing the loan. This process can be modeled using a continuous time non-homogeneous Markov process with discrete states.

When pollen grains are suspended in water they move erratically. This erratic movement of the pollen grains is thought to be due to bombardment by the water molecules that surround them. These bombardments occur many times in each small interval of time, they are independent of each other, and the impact of a single hit is very small compared to the total effect. This suggests that the motion of pollen grains can be studied as a continuous time stochastic process. A botanist, Brown, was the first to observe it, in 1827, and the motion was named 'Brownian motion'.

1.2 Stochastic Processes

Definition and classifications

A stochastic process is a family of random variables $\{X(t);\ t \in T\}$ on a probability space, with $t$ ranging over a suitable parameter set $T$ ($t$ often represents time). The state space of the process is the set $S$ in which the possible values of each $X(t)$ lie. The state space can be either discrete or continuous.
The set of values of $t$ is called the parameter space $T$. The parameter $t$ can also be either discrete or continuous; the parameter space is said to be discrete if the set $T$ is countable, and otherwise it is continuous. Thus, we have the following four classifications of stochastic processes:

1. Discrete parameter and discrete state
2. Discrete parameter and continuous state
3. Continuous parameter and discrete state
4. Continuous parameter and continuous state

Further, stochastic processes are broadly described according to the nature of the dependence relationships existing among the members of the family. Some of these relationships are characterized by:

(i) Stationary processes

A process $\{X(t),\ t \in T\}$ is said to be stationary if different observations on time intervals of the same length have the same distribution, i.e. for any $s, t \in T$, $X(t+s) - X(t)$ has the same distribution as $X(s) - X(0)$.

(ii) Markov processes

A Markov process is a stochastic process whose dynamic behavior is such that the probability distribution for its future development depends only on its present state, and not on the past history of the process or the manner in which the present state was reached. This property is referred to as the Markov or memoryless property. A Markov chain is a discrete-time Markov process with a discrete state space. A Markov jump process is a continuous-time Markov process with a discrete state space.

A Markov process is said to be time homogeneous if the behavior of the system does not depend on when it is observed; in a time homogeneous Markov process the transition rates between states are independent of the time at which the transition occurs. On the other hand, a Markov process is said to be time non-homogeneous if the behavior of the system depends on the time when it is observed.

1.3 Pure Birth Processes

A pure birth process is a continuous time, discrete state Markov process.
Specifically, we deal with a family of random variables $\{X(t);\ 0 \le t < \infty\}$ where the possible values of $X(t)$ are non-negative integers. $X(t)$ represents the population size at time $t$, and the transitions are limited to births. When a birth occurs, the process goes from state $n$ to state $n+1$. If no birth occurs, the process remains in the current state. The process cannot move from a higher state to a lower state since there is no death. The birth process is characterized by the birth rate $\lambda_n$, which varies according to the state $n$ of the system.

The birth and death process is a continuous time Markov process where the states are the population size and the transitions are restricted to births and deaths. When a birth occurs the process jumps to the immediate higher state; similarly, when a death occurs the process jumps to the immediate lower state. In this process births and deaths are assumed to be independent of each other. The process is characterized by a birth rate and a death rate which vary according to the state of the system. A pure birth process is a birth and death process with death rate equal to zero for all states.

1.4 Literature Review

The negative binomial distribution arises from several stochastic processes. The time-homogeneous birth and immigration process with zero initial population was first obtained by McKendrick (1914). The non-homogeneous process with zero initial population, known as the Polya process, was developed by Lundberg (1940) in the context of risk theory. Other stochastic processes that lead to the negative binomial distribution include the simple birth process with non-zero initial population size (Yule, 1925; Furry, 1937). Kendall (1948) considered the non-homogeneous birth-and-death process with zero death rate. He also worked on the simple birth-death-and-immigration process with zero initial population (Kendall, 1949). A remarkable derivation, as the solution of the simple birth-and-immigration process, was given by McKendrick (1914).
Kendall (1948) formed Lagrange's equation from the differential difference equations for the distribution of the population and, via the auxiliary equations, obtained a complete solution of the equations governing the generalized birth and death process in which the birth and death rates may be any specified functions of time.

Karlin and McGregor (1958) expressed the transition probabilities of birth and death processes (BDPs) in terms of a sequence of orthogonal polynomials and spectral measures. The birth and death rates uniquely determine a measure on the real axis with respect to which the sequence of polynomials is orthogonal. Their work gave valuable insights into the existence of a unique solution for a given process.

Gani and Swift (2008) derived equations for the probability generating functions of the Poisson, pure birth and death processes subject to mass movement immigration and emigration. They considered mass movement immigration and emigration as positive and negative mass movements. The resulting probability generating functions turned out to be a product of the probability generating functions of the original processes modified by the immigration process.

The simple birth process was originally introduced by Yule (1925) to model the evolution of new species, and by Furry (1937) to model particle creation. Devaraj and Fu (2008) developed a non-homogeneous Markov chain to describe the deterioration of bridge elements. They showed that the new model could better predict bridge element deterioration trends. Wolter et al. (1998) presented and compared three different uniformization-based algorithms (the base algorithm, the correction algorithm and the interval splitting algorithm) for numerically computing the transient state distribution of a non-homogeneous model. They investigated the numerical solution of the transient state probability vector of a non-homogeneous Markov process by algorithms that are based on uniformization, following Grassmann (1991) and Jensen (1953).
The three algorithms numerically solved the integral equations that constitute the uniformization equation as derived by Dijk (1992).

1.5 Problem Statement and Objectives of the Study

Problem Statement

Most of the literature concerning pure birth processes concentrates on deriving and solving the basic difference differential equations for time homogeneous processes, and rarely presents non-homogeneous birth processes. The few works that have presented non-homogeneous birth processes only sketch them in outline form or scatter the details across different sections. In the analysis, emphasis is often laid on the steady state solutions, while transient or time dependent analysis has received less attention. The assumptions required to derive steady state solutions are seldom satisfied in the analysis of real systems. Some systems never approach equilibrium, and for such systems steady state measures of performance do not make sense. In addition, for systems with a finite time horizon of operation, steady state results are inappropriate. Generally, in real life situations, knowledge of the time dependent behavior of the system is needed rather than the more easily obtained steady state solution.

Objectives

The aim of this thesis is to derive and solve differential equations for the continuous time non-homogeneous pure birth process by applying four alternative approaches: the iteration method; the Kolmogorov equations, solving directly the Lagrange partial differential equation for the generating function by means of the auxiliary equations; the Laplace transform method; and the generator matrix approach. It is hoped that this will demonstrate to statistics scholars the variety of mathematical tools that can be used to solve non-homogeneous pure birth processes. At the same time it is also hoped that students of mathematics will find in these processes an interesting application of standard mathematical methods.
1.6 Areas of Application

Pure birth and death processes play a fundamental role in the theory and applications that embrace population growth. Examples include the spread of new infections in the case of a disease, where each new infection is considered as a birth. Pure birth and death processes have many applications in the following areas.

(a) Biological field

When an infected person makes contact with a susceptible individual, there is potential transmission. A susceptible person makes the potential transition to being an infected person through contact with infected persons. A newly infected person then makes further transitions to other infected states, with the possibility of removal. The removal stage represents death caused by the disease or eventual recovery. Those individuals who have recovered re-enter the susceptible stage. The average number of people that an infected person infects during his or her infectious period is the reproductive number, and the mean number of affected individuals is the outbreak size. These transitions, for both susceptible and infected persons, evolve over time. Such a complex process can be described mathematically through continuous time stochastic processes.

(b) Radioactivity

Radioactive atoms are unstable and disintegrate stochastically. Each of the new atoms is also unstable. By the emission of radioactive particles, these new atoms pass through a number of physical states with specified decay rates from one state to the adjacent one. Thus radioactive transformation can be modeled as a birth process.

(c) Communication

Suppose that calls arrive at a single channel telephone exchange such that the times between successive call arrivals are independent exponential random variables. Suppose that a connection is realized if the incoming call finds an idle channel. If the channel is busy, then the incoming call joins the queue. When the caller is through, the next caller is connected.
Assuming that the successive service times are independent exponential variables, the number of callers in the system at time $t$ is described by a birth and death process.

CHAPTER TWO
TRANSITION PROBABILITIES FOR A GENERAL PURE BIRTH PROCESS

2.1 Introduction

A stochastic process $\{N(t);\ t > 0\}$ is a collection of random variables, indexed by the variable $t$ (which often represents time). A counting process is a stochastic process in which $N(t)$ must be a non-negative integer and, for $t > s$, $N(t) \ge N(s)$. Our interest is in the variable $N(t) - N(s)$. We refer to this variable as an increment in the interval $(s, t]$.

Consider a set of states $E_j$, where $E_j$ is the $j$th state. The probability of moving from state $E_j$ to state $E_k$ is denoted by $p_{jk}$ and is called the transition probability from $j$ to $k$.

Definition 1. A stochastic process has stationary increments if the distribution of $N(t) - N(s)$, for $t \ge s$, depends only on the length of the interval $t - s$.

Remark. One consequence of stationary increments is that the behavior of the process does not change over time. Some researchers also call such a process time homogeneous.

Definition 2. A stochastic process has independent increments if the increments for any set of disjoint intervals are independent.

We shall examine counting processes that are Markovian. A loose definition is that, for $t \ge s$, the distribution of $N(t) - N(s)$ given $N(s)$ is the same as if any of the values $N(s), N(u_1), N(u_2), \ldots$ with all $u_1, u_2, \ldots \le s$ were given. For a Markovian counting process, the probabilities of greatest interest are the transition probabilities given by

$$p_{k,k+n}(s,t) = \text{Prob}\{N(t) - N(s) = n \mid N(s) = k\} \tag{2.1}$$

for $0 \le s \le t$.

The marginal distribution of the increment $N(t) - N(s)$ may be obtained by an application of the law of total probability.
That is,

$$\text{Prob}\{N(t) - N(s) = n\} = \sum_{k=0}^{\infty} \text{Prob}\{N(t) - N(s) = n \mid N(s) = k\}\;\text{Prob}\{N(s) = k\} = \sum_{k=0}^{\infty} p_{k,k+n}(s,t)\, p_k(s) \tag{2.2}$$

2.2 Chapman-Kolmogorov Equations

Feller (1968) states that the transition probabilities of a time homogeneous Markov process satisfy the Chapman-Kolmogorov equation

$$p_{in}(s+t) = \sum_{v} p_{iv}(s)\, p_{vn}(t). \tag{2.3}$$

The transition probabilities of a time non-homogeneous Markov process satisfy the Chapman-Kolmogorov equation

$$p_{in}(\tau, t) = \sum_{v} p_{iv}(\tau, s)\, p_{vn}(s, t), \tag{2.4}$$

valid for $\tau \le s \le t$. This relation expresses the fact that a transition from the state $E_i$ at epoch (time) $\tau$ to $E_n$ at epoch $t$ occurs via some state $E_v$ at the intermediate epoch $s$, and for a Markov process the probability $p_{vn}(s,t)$ of the transition from $E_v$ to $E_n$ is independent of the previous state $E_i$. The transition probabilities of Markov processes with countably many states are therefore solutions of the Chapman-Kolmogorov identity (2.4) satisfying the side conditions

$$p_{ik}(\tau, t) \ge 0, \qquad \sum_{k} p_{ik}(\tau, s) = 1. \tag{2.5}$$

2.2.1 Time Homogeneous Markov Process

Proof of formula (2.3)

The transition probability $p_{in}(s+t)$ is the probability of moving from state $E_i$ at time 0 to state $E_n$ at time $s+t$, passing via some state $E_v$ at the intermediate time $s$:

$$p_{in}(s+t) = \sum_{v} \text{Prob}\{X(s+t) = n, X(s) = v \mid X(0) = i\}$$
$$= \sum_{v} \frac{\text{Prob}\{X(s+t) = n, X(s) = v, X(0) = i\}}{\text{Prob}\{X(0) = i\}}$$
$$= \sum_{v} \text{Prob}\{X(s+t) = n \mid X(s) = v, X(0) = i\}\;\text{Prob}\{X(s) = v \mid X(0) = i\}$$

Because of the Markov property,

$$p_{in}(s+t) = \sum_{v} \text{Prob}\{X(s+t) = n \mid X(s) = v\}\;\text{Prob}\{X(s) = v \mid X(0) = i\} = \sum_{v} p_{iv}(s)\, p_{vn}(t)$$

2.2.2 Time Non-Homogeneous Markov Process

Proof of formula (2.4)

Consider a transition from the state $E_i$ at epoch $\tau$ to state $E_n$ at epoch $t$, occurring via some state $E_v$ at the intermediate epoch $s$.
Diagrammatically:

Time:   τ     s     t
State:  E_i   E_v   E_n

$$p_{in}(\tau,t) = \sum_{v} \text{Prob}\{X(t) = n, X(s) = v \mid X(\tau) = i\}$$
$$= \sum_{v} \frac{\text{Prob}\{X(t) = n, X(s) = v, X(\tau) = i\}}{\text{Prob}\{X(\tau) = i\}}$$
$$= \sum_{v} \text{Prob}\{X(t) = n \mid X(s) = v, X(\tau) = i\}\;\text{Prob}\{X(s) = v \mid X(\tau) = i\}$$

Therefore, by the Markov property,

$$p_{in}(\tau,t) = \sum_{v} \text{Prob}\{X(t) = n \mid X(s) = v\}\;\text{Prob}\{X(s) = v \mid X(\tau) = i\} = \sum_{v} p_{iv}(\tau,s)\, p_{vn}(s,t)$$

2.3 Forward and Backward Kolmogorov Differential Equations

2.3.1 Forward Kolmogorov Differential Equation

Using the notation of Klugman, Panjer and Willmot (2008), replace $i$ by $k$ and $n$ by $k+n$. Therefore,

$$p_{k,k+n}(\tau,t) = \sum_{v} p_{kv}(\tau,s)\, p_{v,k+n}(s,t)$$

Next, replace $\tau$ by $s$, $s$ by $t$ and $t$ by $t+h$. Thus we have

$$p_{k,k+n}(s,t+h) = \sum_{v} p_{kv}(s,t)\, p_{v,k+n}(t,t+h)$$

Finally, replace $v$ by $k+j$:

$$p_{k,k+n}(s,t+h) = \sum_{j=0}^{n} p_{k,k+j}(s,t)\, p_{k+j,k+n}(t,t+h) \tag{2.6}$$

This can also be re-written as

$$p_{k,k+n}(s,t+h) = \sum_{j=0}^{n-2} p_{k,k+j}(s,t)\, p_{k+j,k+n}(t,t+h) + p_{k,k+n-1}(s,t)\, p_{k+n-1,k+n}(t,t+h) + p_{k,k+n}(s,t)\, p_{k+n,k+n}(t,t+h) \tag{2.7}$$

But for a pure birth process,

$$p_{k,k+1}(t,t+h) = \lambda_k(t)h + o(h); \qquad k = 0, 1, 2, \ldots \tag{2.8}$$
$$p_{kk}(t,t+h) = 1 - \lambda_k(t)h + o(h); \qquad k = 0, 1, 2, \ldots \tag{2.9}$$

and

$$p_{k,k+j}(t,t+h) = o(h); \qquad j = 2, 3, \ldots, n \tag{2.10}$$

Applying (2.8), (2.9) and (2.10) in (2.7) gives

$$p_{k,k+n}(s,t+h) = o(h)\sum_{j=0}^{n-2} p_{k,k+j}(s,t) + \left[\lambda_{k+n-1}(t)h + o(h)\right] p_{k,k+n-1}(s,t) + \left[1 - \lambda_{k+n}(t)h + o(h)\right] p_{k,k+n}(s,t)$$

Therefore,

$$\lim_{h \to 0} \frac{p_{k,k+n}(s,t+h) - p_{k,k+n}(s,t)}{h} = \lim_{h \to 0} \frac{o(h)}{h}\sum_{j=0}^{n-2} p_{k,k+j}(s,t) + \lambda_{k+n-1}(t)\, p_{k,k+n-1}(s,t) - \lambda_{k+n}(t)\, p_{k,k+n}(s,t)$$

i.e.

$$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + \lambda_{k+n}(t)\, p_{k,k+n}(s,t) = \lambda_{k+n-1}(t)\, p_{k,k+n-1}(s,t) \tag{2.11}$$

where $p_{k,k-1}(s,t) = 0$.

In general,

$$\frac{\partial}{\partial t} p_{ij}(s,t) + \lambda_j(t)\, p_{ij}(s,t) = \lambda_{j-1}(t)\, p_{i,j-1}(s,t) \tag{2.12}$$

where $p_{j,j-1}(s,t) = 0$.

For this non-homogeneous birth process, the initial conditions are: $p_{kk}(s,s) = 1$ and $p_{k,k+n}(s,s) = 0$ for $n > 0$.
(2.13)

2.3.2 Backward Kolmogorov Differential Equations

Consider the following diagram:

Time:   s     s+h    t
State:  E_i   E_k    E_j

$$p_{ij}(s,t) = \sum_{k} p_{ik}(s,s+h)\, p_{kj}(s+h,t)$$

But for a pure birth process,

$$p_{i,i+1}(s,s+h) = \lambda_i(s)h + o(h)$$
$$p_{ii}(s,s+h) = 1 - \lambda_i(s)h + o(h)$$

and

$$p_{ik}(s,s+h) = o(h) \quad \text{for } k \ge i+2$$

Therefore,

$$p_{ij}(s,t) = p_{ii}(s,s+h)\, p_{ij}(s+h,t) + p_{i,i+1}(s,s+h)\, p_{i+1,j}(s+h,t) + \sum_{k=i+2}^{j} p_{ik}(s,s+h)\, p_{kj}(s+h,t)$$
$$= \left[1 - \lambda_i(s)h + o(h)\right] p_{ij}(s+h,t) + \left[\lambda_i(s)h + o(h)\right] p_{i+1,j}(s+h,t) + \sum_{k=i+2}^{j} o(h)\, p_{kj}(s+h,t)$$

Therefore,

$$p_{ij}(s,t) - p_{ij}(s+h,t) = -\left[\lambda_i(s)h + o(h)\right] p_{ij}(s+h,t) + \left[\lambda_i(s)h + o(h)\right] p_{i+1,j}(s+h,t) + \sum_{k=i+2}^{j} o(h)\, p_{kj}(s+h,t)$$

and hence

$$-\frac{\partial}{\partial s} p_{ij}(s,t) = \lim_{h \to 0} \frac{p_{ij}(s,t) - p_{ij}(s+h,t)}{h} = -\lambda_i(s)\, p_{ij}(s,t) + \lambda_i(s)\, p_{i+1,j}(s,t) \tag{2.14}$$

NB: the duration $(s,t)$ is longer than the duration $(s+h,t)$.

This is the Kolmogorov backward differential equation.

2.4 Methods of Solving Continuous Time Non-Homogeneous Forward Kolmogorov Differential Equations

Equation (2.11) can be solved using the following techniques. In Chapter Three, we shall use the integrating factor method. In Chapter Four, we shall use Lagrange's method. In Chapter Five, we shall use the matrix method. Highlights of the key steps in each of these three methods are given below. Some comments on the Laplace method have also been given, even though it has not been used.

2.4.1 Integrating Factor Technique

Let a differential equation be of the form

$$\frac{dy}{dx} + ay = R \tag{2.15}$$

where $y$ is a function of $x$, and $R$ is a constant or a function of $x$. To solve for $y$ in this kind of differential equation, given the initial conditions, we use the following procedure.

Procedure

- Let the integrating factor be $IF = e^{\int a\,dx}$.
- Multiply both sides of the equation (of the form (2.15)) you want to solve by the integrating factor.
- Express the new equation in the form $\frac{d}{dx}\left(e^{\int a\,dx}\, y\right) = R\, e^{\int a\,dx}$.
- Integrate both sides with respect to $x$.
- Use the initial condition to find the constant of integration.
- Find $y$.
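To see the procedure in action, take the forward equation (2.11) in the Poisson case ($\lambda_n(t) = \lambda$) with $n = 1$: $\frac{\partial}{\partial t} p_{k,k+1}(s,t) + \lambda\, p_{k,k+1}(s,t) = \lambda e^{-\lambda(t-s)}$, with $p_{k,k+1}(s,s) = 0$. The following symbolic sketch (the use of SymPy here is illustrative, not part of the thesis) mirrors the steps above:

```python
import sympy as sp

t, u, s, lam = sp.symbols('t u s lam', positive=True)

# Right-hand side R(t) = lam * p_{kk}(s,t) = lam * exp(-lam*(t - s))
R = lam * sp.exp(-lam * (t - s))

# Step 1: integrating factor IF = exp(integral of lam dt) = exp(lam*t)
IF = sp.exp(lam * t)

# Steps 2-4: d/dt [IF * p] = IF * R, so integrate IF*R from s to t;
# the initial condition p_{k,k+1}(s,s) = 0 kills the constant of integration
rhs_integral = sp.integrate((IF * R).subs(t, u), (u, s, t))

# Step 5: divide by the integrating factor to recover p_{k,k+1}(s,t)
p_k_k1 = sp.simplify(rhs_integral / IF)

# p_k_k1 equals lam*(t - s)*exp(-lam*(t - s)), the n = 1 Poisson probability
```

Running the same loop for $n = 2, 3, \ldots$ reproduces the full Poisson increment distribution quoted in the executive summary.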
2.4.2 Lagrange's Method

Let P, Q and R be functions of x, y and z, and suppose we have an equation of the form

$P \frac{\partial z}{\partial x} + Q \frac{\partial z}{\partial y} = R$,   (2.16)

subject to some appropriate boundary conditions. Such an equation is called a linear partial differential equation. The Lagrange method of solving this kind of equation is described below.

The probability generating function (pgf) is one of the major analytical tools used to work with stochastic processes on discrete state spaces.

Definition. Let X be a non-negative integer-valued random variable such that $P[X = k] = p_k$, k = 0, 1, 2, ..., is the probability mass function (pmf). Then the probability generating function is given by

$G(s) = E[s^X] = \sum_{k=0}^{\infty} p_k s^k$.

If we are able to obtain the pgf of a stochastic process, then we can find the pmf of the process: the pmf $p_k$ is the coefficient of $s^k$.

Procedure
1. Define $G(s,t,z) = \sum_{n=0}^{\infty} p_{k,k+n}(s,t)\, z^n$, and from it the partial derivatives $\frac{\partial G}{\partial t}$ and $\frac{\partial G}{\partial z}$.
2. Multiply both sides of the forward differential equation by $z^n$ and sum over n; taking advantage of the initial conditions, write the resulting equation in terms of the definitions above.
3. Summarize the results by a single Lagrange partial differential equation for the generating function, of the form

$P \frac{\partial G(s,t,z)}{\partial t} + Q \frac{\partial G(s,t,z)}{\partial z} = R$.

4. Form the resulting auxiliary equations from the equation above. They will be of the form

$\frac{dt}{P} = \frac{dz}{Q} = \frac{dG(s,t,z)}{R}$.

We can form three subsidiary equations from the above:

(i) $\frac{dt}{P} = \frac{dz}{Q}$;  (ii) $\frac{dt}{P} = \frac{dG(s,t,z)}{R}$;  (iii) $\frac{dz}{Q} = \frac{dG(s,t,z)}{R}$.

5. Consider any two of these equations and solve them. The solutions are of the form

$u(s,t,z) = \text{constant}$   (2.17)

and

$v(s,t,z) = \text{constant}$.   (2.18)

The most general solution of (2.16) is now given by $u = \psi(v)$, where ψ is an arbitrary function. The precise form of this function is determined when the boundary conditions have been inserted.
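The pgf-to-pmf step used throughout this method can be illustrated directly. The sketch below (assuming sympy is available; the Poisson pgf $e^{\lambda(z-1)}$ with λ = 3/2 is used purely as an example, anticipating the result of Chapter Four) recovers the pmf as the power-series coefficients of the pgf:

```python
import sympy as sp
from math import exp, factorial

z = sp.symbols('z')
lam = sp.Rational(3, 2)            # illustrative rate

# pgf of a Poisson(lam) random variable: G(z) = E[z^X] = e^{lam (z - 1)}
G = sp.exp(lam * (z - 1))

# p_n is the coefficient of z^n in G, i.e. G^{(n)}(0) / n!
for n in range(6):
    p_n = G.diff(z, n).subs(z, 0) / sp.factorial(n)
    assert abs(float(p_n) - exp(-1.5) * 1.5 ** n / factorial(n)) < 1e-12
print("pgf coefficients reproduce the Poisson pmf")
```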
2.4.3 Laplace Transforms

$L\{f(t)\} = \int_0^{\infty} e^{-st} f(t)\, dt$.   (2.19)

Using integration by parts,

$\int v\, du = uv - \int u\, dv$.   (2.20)

Let $v = e^{-st}$, so that $dv = -s e^{-st}\, dt$; also let $du = f'(t)\, dt$, so that $u = f(t)$. Substituting in equation (2.20),

$\int_0^{\infty} e^{-st} f'(t)\, dt = \left[f(t)\, e^{-st}\right]_0^{\infty} + s \int_0^{\infty} f(t)\, e^{-st}\, dt = -f(0) + s \int_0^{\infty} f(t)\, e^{-st}\, dt$.

Thus

$L\{f'(t)\} = s\, L\{f(t)\} - f(0)$.

Substituting $p_{ij}(t)$ for $f(t)$, the equation above becomes

$L\{p'_{ij}(t)\} = s\, L\{p_{ij}(t)\} - p_{ij}(0)$.   (2.21)

Procedure
1. Take the Laplace transform of the second of the two basic difference-differential equations.
2. Apply the relation $L\{p'_{ij}(t)\} = s\, L\{p_{ij}(t)\} - p_{ij}(0)$ to replace $L\{p'_{ij}(t)\}$ and simplify, leaving $L\{p_{ij}(t)\}$ as the subject of the formula.
3. Starting with the conditions at t = 0, generate a recursive relation and use it to generalize $L\{p_{ij}(t)\}$.
4. Find the Laplace inverse of the $L\{p_{ij}(t)\}$ so obtained.

2.4.4 Generator Matrix Method

1. Express the differential equations in matrix form.
2. Obtain the eigenvalues and the corresponding eigenvectors. If the generator has distinct eigenvalues $\mu_1, \mu_2, \dots, \mu_k$, say, then it factorizes as $U D V$, where $V = U^{-1}$, $D = \mathrm{diag}(\mu_1, \mu_2, \dots, \mu_k)$, and the i-th column of U is the right eigenvector associated with $\mu_i$.
3. Work out $p(t) = U\, \mathrm{diag}(e^{\mu_1 t}, \dots, e^{\mu_k t})\, V$.
4. Substitute into the resulting expression, which takes the form

$p_{j,j+n}(s,t) = \left(\prod_{k=0}^{n-1} \lambda_{j+k}\right) \sum_{v=0}^{n} \frac{e^{-\lambda_{j+v}(t-s)}}{\prod_{k=0,\, k \neq v}^{n} \left(\lambda_{j+k} - \lambda_{j+v}\right)}$,

and simplify.

CHAPTER THREE
TRANSITION PROBABILITIES BASED ON THE INTEGRATING FACTOR TECHNIQUE

3.1 Introduction

In this chapter, we shall solve equation (2.11) using the integrating factor technique. We shall solve this equation for all four pure birth processes, starting with the Poisson process, then the simple birth process, the simple birth process with immigration, and finally the Polya process. By putting n = 0 in equation (2.11), we shall find $p_{kk}(s,t)$. Putting higher values of n in the same equation will generate a recursive relation. Each generated equation will be solved using the integrating factor technique.
We shall then generalize each case by induction.

3.2 Determining $p_{kk}(s,t)$

When n = 0, equation (2.11) becomes

$\frac{\partial}{\partial t} p_{kk}(s,t) + \lambda_k(t)\, p_{kk}(s,t) = \lambda_{k-1}(t)\, p_{k,k-1}(s,t)$.

But from (2.12), $p_{k,k-1}(s,t) = 0$. Therefore

$\frac{\partial}{\partial t} p_{kk}(s,t) + \lambda_k(t)\, p_{kk}(s,t) = 0$,

i.e.

$\frac{\partial}{\partial t} \log p_{kk}(s,t) = -\lambda_k(t)$.

Integrating from s to t, and using $p_{kk}(s,s) = 1$,

$\log p_{kk}(s,t) = -\int_s^t \lambda_k(x)\, dx$.

Therefore

$p_{kk}(s,t) = e^{-\int_s^t \lambda_k(x)\, dx}$.   (3.1)

For $n \neq 0$, we use the integrating factor $e^{\int \lambda_{k+n}(t)\, dt}$ in equation (2.11). We shall now look at special cases.

3.3 Special Cases of Non-Homogeneous Markov Processes

3.3.1 Poisson Process

Here $\lambda_k = \lambda$ for all k, so

$p_{kk}(s,t) = e^{-\int_s^t \lambda\, dx} = e^{-\lambda(t-s)}$.   (3.2)

For $n \neq 0$, the differential equation for the transition probabilities is

$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + \lambda\, p_{k,k+n}(s,t) = \lambda\, p_{k,k+n-1}(s,t)$.   (3.3)

When n = 1,

$\frac{\partial}{\partial t} p_{k,k+1}(s,t) + \lambda\, p_{k,k+1}(s,t) = \lambda\, p_{kk}(s,t) = \lambda\, e^{-\lambda(t-s)}$.

The integrating factor is $e^{\int \lambda\, dt} = e^{\lambda t}$. Therefore

$\frac{\partial}{\partial t}\left[e^{\lambda t}\, p_{k,k+1}(s,t)\right] = \lambda\, e^{\lambda t}\, e^{-\lambda(t-s)} = \lambda\, e^{\lambda s}$.

Integrating from s to t, and using $p_{k,k+1}(s,s) = 0$,

$e^{\lambda t}\, p_{k,k+1}(s,t) = \lambda(t-s)\, e^{\lambda s}$.

Equivalently,

$p_{k,k+1}(s,t) = \lambda(t-s)\, e^{-\lambda(t-s)}$.   (3.4)

When n = 2, equation (2.11) becomes

$\frac{\partial}{\partial t} p_{k,k+2}(s,t) + \lambda\, p_{k,k+2}(s,t) = \lambda^2 (t-s)\, e^{-\lambda(t-s)}$.

Multiplying by the integrating factor $e^{\lambda t}$,

$\frac{\partial}{\partial t}\left[e^{\lambda t}\, p_{k,k+2}(s,t)\right] = \lambda^2 (t-s)\, e^{\lambda s}$.

Integrating both sides from s to t, with $p_{k,k+2}(s,s) = 0$,

$e^{\lambda t}\, p_{k,k+2}(s,t) = \lambda^2 e^{\lambda s} \left[\frac{y^2}{2} - sy\right]_s^t = \lambda^2 e^{\lambda s} \left(\frac{t^2}{2} - st + \frac{s^2}{2}\right) = e^{\lambda s}\, \frac{[\lambda(t-s)]^2}{2!}$.
Therefore

$p_{k,k+2}(s,t) = e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^2}{2!}$.   (3.5)

By induction, assume

$p_{k,k+n-1}(s,t) = e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^{n-1}}{(n-1)!}$.

Then equation (2.11) becomes

$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + \lambda\, p_{k,k+n}(s,t) = \lambda\, e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^{n-1}}{(n-1)!}$.

Multiplying by the integrating factor $e^{\lambda t}$,

$\frac{\partial}{\partial t}\left[e^{\lambda t}\, p_{k,k+n}(s,t)\right] = \frac{\lambda^n e^{\lambda s}}{(n-1)!}\, (t-s)^{n-1}$.

Integrating both sides, with the substitution $u = y - s$ (so du = dy; when y = s, u = 0 and when y = t, u = t - s) and $p_{k,k+n}(s,s) = 0$,

$e^{\lambda t}\, p_{k,k+n}(s,t) = \frac{\lambda^n e^{\lambda s}}{(n-1)!} \int_0^{t-s} u^{n-1}\, du = \frac{\lambda^n e^{\lambda s}}{(n-1)!} \cdot \frac{(t-s)^n}{n} = e^{\lambda s}\, \frac{[\lambda(t-s)]^n}{n!}$.

Therefore

$p_{k,k+n}(s,t) = e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^n}{n!}$, n = 0, 1, 2, ....   (3.6)

This is independent of the initial state and depends only on the length of the time interval; thus for a Poisson process the increments are independent and stationary.

Remarks. Formula (3.6) does not depend on k, which reflects the fact that the increments are independent. From (2.2), the marginal distribution of the increment $N_t - N_s$ is thus given by

$\mathrm{Prob}[N_t - N_s = n] = \sum_{k=0}^{\infty} p_{k,k+n}(s,t)\, p_k(s) = e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^n}{n!} \sum_{k=0}^{\infty} p_k(s)$.

Therefore

$\mathrm{Prob}[N_t - N_s = n] = e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^n}{n!}$, n = 0, 1, 2, ...,   (3.7)

which depends only on t - s; therefore the process is stationary. Thus the Poisson process is a homogeneous birth process with stationary and independent increments. Note that

$\mathrm{Prob}[N_t - N_0 = n] = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}$.

Since $N_t$ is the number of events that have occurred up to time t, it is convenient to define $N_0 = 0$. Therefore

$p_n(t) = \mathrm{Prob}[N_t = n] = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}$, n = 0, 1, 2, ....   (3.8)

$N_t$ has a Poisson distribution with mean λt.

3.3.2 Simple Birth Process

Here $\lambda_k = k\lambda$ for k = 0, 1, 2, ....
For n = 0, (2.11) reduces to (3.1). Therefore

$p_{kk}(s,t) = e^{-\int_s^t k\lambda\, dx} = e^{-k\lambda(t-s)}$.   (3.9)

For $n \neq 0$, (2.11) becomes

$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + (k+n)\lambda\, p_{k,k+n}(s,t) = (k+n-1)\lambda\, p_{k,k+n-1}(s,t)$.   (3.10)

For n = 1,

$\frac{\partial}{\partial t} p_{k,k+1}(s,t) + (k+1)\lambda\, p_{k,k+1}(s,t) = k\lambda\, p_{kk}(s,t) = k\lambda\, e^{-k\lambda(t-s)}$.

The integrating factor is $e^{\int (k+1)\lambda\, dt} = e^{(k+1)\lambda t}$. Multiplying through,

$\frac{\partial}{\partial t}\left[e^{(k+1)\lambda t}\, p_{k,k+1}(s,t)\right] = k\lambda\, e^{(k+1)\lambda t}\, e^{-k\lambda(t-s)} = k\lambda\, e^{\lambda t}\, e^{k\lambda s}$.

Integrating both sides from s to t, with $p_{k,k+1}(s,s) = 0$,

$e^{(k+1)\lambda t}\, p_{k,k+1}(s,t) = k\lambda\, e^{k\lambda s} \int_s^t e^{\lambda y}\, dy = k\, e^{k\lambda s}\left(e^{\lambda t} - e^{\lambda s}\right)$.

Therefore

$p_{k,k+1}(s,t) = k\, e^{k\lambda s}\left(e^{\lambda t} - e^{\lambda s}\right) e^{-(k+1)\lambda t} = k\, e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right) = \binom{k}{1} e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)$.   (3.11)

For n = 2,

$\frac{\partial}{\partial t} p_{k,k+2}(s,t) + (k+2)\lambda\, p_{k,k+2}(s,t) = (k+1)\lambda\, p_{k,k+1}(s,t) = (k+1)\lambda\, k\, e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)$.

The integrating factor is $e^{\int (k+2)\lambda\, dt} = e^{(k+2)\lambda t}$. Multiplying the above equation by the integrating factor,

$\frac{\partial}{\partial t}\left[e^{(k+2)\lambda t}\, p_{k,k+2}(s,t)\right] = \lambda k(k+1)\, e^{k\lambda s}\left(e^{2\lambda t} - e^{\lambda s} e^{\lambda t}\right)$.

Integrating both sides from s to t, with $p_{k,k+2}(s,s) = 0$,

$e^{(k+2)\lambda t}\, p_{k,k+2}(s,t) = \lambda k(k+1)\, e^{k\lambda s}\left[\frac{e^{2\lambda y}}{2\lambda} - \frac{e^{\lambda s} e^{\lambda y}}{\lambda}\right]_s^t = k(k+1)\, e^{k\lambda s}\left(\frac{e^{2\lambda t}}{2} - e^{\lambda s} e^{\lambda t} + \frac{e^{2\lambda s}}{2}\right)$.
Simplifying,

$p_{k,k+2}(s,t) = \frac{k(k+1)}{2}\, e^{k\lambda s}\, e^{-(k+2)\lambda t}\left(e^{2\lambda t} - 2 e^{\lambda s} e^{\lambda t} + e^{2\lambda s}\right) = \frac{k(k+1)}{2}\, e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^2$.

Equivalently,

$p_{k,k+2}(s,t) = \binom{k+1}{2} e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^2$.   (3.12)

Put n = 3; then (2.11) becomes

$\frac{\partial}{\partial t} p_{k,k+3}(s,t) + (k+3)\lambda\, p_{k,k+3}(s,t) = (k+2)\lambda\, p_{k,k+2}(s,t)$.

The integrating factor is $e^{(k+3)\lambda t}$. Multiplying through and substituting (3.12), noting $(k+2)\binom{k+1}{2} = 3\binom{k+2}{3}$,

$\frac{\partial}{\partial t}\left[e^{(k+3)\lambda t}\, p_{k,k+3}(s,t)\right] = 3\lambda \binom{k+2}{3} e^{k\lambda s}\, e^{\lambda t}\left(e^{\lambda t} - e^{\lambda s}\right)^2$.

Integrating, with the substitution $u = e^{\lambda y} - e^{\lambda s}$ (so $du = \lambda e^{\lambda y}\, dy$; u = 0 when y = s and $u = e^{\lambda t} - e^{\lambda s}$ when y = t), and using $p_{k,k+3}(s,s) = 0$,

$e^{(k+3)\lambda t}\, p_{k,k+3}(s,t) = 3\binom{k+2}{3} e^{k\lambda s} \int_0^{e^{\lambda t}-e^{\lambda s}} u^2\, du = \binom{k+2}{3} e^{k\lambda s}\left(e^{\lambda t} - e^{\lambda s}\right)^3$.

Therefore

$p_{k,k+3}(s,t) = \binom{k+2}{3} e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^3$.   (3.13)

By induction, assume

$p_{k,k+n-1}(s,t) = \binom{k+n-2}{n-1} e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^{n-1}$   (3.14)

to be true.
Then equation (2.11) becomes

$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + (k+n)\lambda\, p_{k,k+n}(s,t) = (k+n-1)\lambda \binom{k+n-2}{n-1} e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^{n-1} = n\lambda \binom{k+n-1}{n} e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^{n-1}$,

since $(k+n-1)\binom{k+n-2}{n-1} = n\binom{k+n-1}{n}$. The integrating factor is $e^{(k+n)\lambda t}$. Multiplying through,

$\frac{\partial}{\partial t}\left[e^{(k+n)\lambda t}\, p_{k,k+n}(s,t)\right] = n \binom{k+n-1}{n} e^{k\lambda s}\, \lambda e^{\lambda t}\left(e^{\lambda t} - e^{\lambda s}\right)^{n-1}$.

Integrating both sides, with $u = e^{\lambda y} - e^{\lambda s}$ as before ($du = \lambda e^{\lambda y}\, dy$; u = 0 when y = s and $u = e^{\lambda t} - e^{\lambda s}$ when y = t) and $p_{k,k+n}(s,s) = 0$,

$e^{(k+n)\lambda t}\, p_{k,k+n}(s,t) = n \binom{k+n-1}{n} e^{k\lambda s} \int_0^{e^{\lambda t}-e^{\lambda s}} u^{n-1}\, du = \binom{k+n-1}{n} e^{k\lambda s}\left(e^{\lambda t} - e^{\lambda s}\right)^n$.

Therefore

$p_{k,k+n}(s,t) = \binom{k+n-1}{n} e^{-k\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^n$.   (3.15)

This is a negative binomial distribution with $p = e^{-\lambda(t-s)}$ and $q = 1 - e^{-\lambda(t-s)}$. It depends on the length of the time interval t - s and on k; the increments are thus stationary but not independent.
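Formula (3.15) can be checked by simulation. The sketch below (a Monte Carlo illustration; the function name and parameter values are ours) exploits the fact that while the population size is m, the waiting time to the next birth is exponential with rate mλ; replacing that rate by mλ + v would give the immigration case of the next section:

```python
import random
from math import comb, exp

random.seed(1)
lam, t, k, trials = 1.0, 0.8, 2, 200_000

def simple_birth(k, lam, t):
    """Population of a simple birth process at time t, started from k.
    While the size is m, the next birth occurs after an Exp(m*lam) time."""
    m, clock = k, 0.0
    while True:
        clock += random.expovariate(m * lam)
        if clock > t:
            return m
        m += 1

counts = {}
for _ in range(trials):
    n = simple_birth(k, lam, t) - k
    counts[n] = counts.get(n, 0) + 1

# Compare with (3.15): p_{k,k+n} = C(k+n-1, n) e^{-k lam t} (1 - e^{-lam t})^n
p, q = exp(-lam * t), 1.0 - exp(-lam * t)
for n in range(4):
    theory = comb(k + n - 1, n) * p ** k * q ** n
    assert abs(counts.get(n, 0) / trials - theory) < 0.01
print("empirical increments match the negative binomial (3.15)")
```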
3.3.3 Simple Birth Process with Immigration

For simple birth with immigration, $\lambda_k = k\lambda + v$, so

$p_{kk}(s,t) = e^{-\int_s^t (k\lambda + v)\, dx} = e^{-(k\lambda + v)(t-s)}$.

For $n \neq 0$, (2.11) becomes

$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + [(k+n)\lambda + v]\, p_{k,k+n}(s,t) = [(k+n-1)\lambda + v]\, p_{k,k+n-1}(s,t)$.   (3.16)

When n = 1, equation (3.16) becomes

$\frac{\partial}{\partial t} p_{k,k+1}(s,t) + [(k+1)\lambda + v]\, p_{k,k+1}(s,t) = (k\lambda + v)\, e^{-(k\lambda+v)(t-s)}$.   (3.17)

To integrate equation (3.17), the integrating factor is $e^{\int [(k+1)\lambda + v]\, dt} = e^{[(k+1)\lambda + v]t}$. Multiplying (3.17) through,

$\frac{\partial}{\partial t}\left[e^{[(k+1)\lambda+v]t}\, p_{k,k+1}(s,t)\right] = (k\lambda + v)\, e^{(k\lambda+v)s}\, e^{\lambda t}$.

Integrating from s to t, and using $p_{k,k+1}(s,s) = 0$ from (2.13),

$e^{[(k+1)\lambda+v]t}\, p_{k,k+1}(s,t) = \frac{k\lambda+v}{\lambda}\, e^{(k\lambda+v)s}\left(e^{\lambda t} - e^{\lambda s}\right)$.

Therefore

$p_{k,k+1}(s,t) = \left(k + \frac{v}{\lambda}\right) e^{-(k\lambda+v)(t-s)}\left(1 - e^{-\lambda(t-s)}\right)$.   (3.18)

When n = 2,

$\frac{\partial}{\partial t} p_{k,k+2}(s,t) + [(k+2)\lambda + v]\, p_{k,k+2}(s,t) = [(k+1)\lambda + v]\left(k + \frac{v}{\lambda}\right) e^{-(k\lambda+v)(t-s)}\left(1 - e^{-\lambda(t-s)}\right)$.

The integrating factor is $e^{[(k+2)\lambda+v]t}$. Multiplying through, and noting $(k+1)\lambda + v = \lambda\left(k+\frac{v}{\lambda}+1\right)$,

$\frac{\partial}{\partial t}\left[e^{[(k+2)\lambda+v]t}\, p_{k,k+2}(s,t)\right] = \lambda\left(k+\frac{v}{\lambda}\right)\left(k+\frac{v}{\lambda}+1\right) e^{(k\lambda+v)s}\left(e^{2\lambda t} - e^{\lambda s} e^{\lambda t}\right)$.

Integrating both sides with respect to t:
$e^{[(k+2)\lambda+v]t}\, p_{k,k+2}(s,t) = \lambda\left(k+\frac{v}{\lambda}\right)\left(k+\frac{v}{\lambda}+1\right) e^{(k\lambda+v)s} \int_s^t \left(e^{2\lambda y} - e^{\lambda s} e^{\lambda y}\right) dy = \left(k+\frac{v}{\lambda}\right)\left(k+\frac{v}{\lambda}+1\right) e^{(k\lambda+v)s}\left(\frac{e^{2\lambda t}}{2} - e^{\lambda s} e^{\lambda t} + \frac{e^{2\lambda s}}{2}\right)$,

where $p_{k,k+2}(s,s) = 0$ from (2.13) has been used. The bracket equals $\frac{1}{2}\left(e^{\lambda t} - e^{\lambda s}\right)^2$. Therefore

$p_{k,k+2}(s,t) = \binom{k+\frac{v}{\lambda}+1}{2} e^{-(k\lambda+v)(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^2$.   (3.19)

When n = 3, equation (3.16) becomes

$\frac{\partial}{\partial t} p_{k,k+3}(s,t) + [(k+3)\lambda+v]\, p_{k,k+3}(s,t) = [(k+2)\lambda+v]\, p_{k,k+2}(s,t) = \lambda\, \frac{\left(k+\frac{v}{\lambda}\right)\left(k+\frac{v}{\lambda}+1\right)\left(k+\frac{v}{\lambda}+2\right)}{2}\, e^{-(k\lambda+v)(t-s)}\left(1-e^{-\lambda(t-s)}\right)^2$.

The integrating factor is $e^{\int [(k+3)\lambda+v]\, dt} = e^{[(k+3)\lambda+v]t}$. Multiplying through,

$\frac{\partial}{\partial t}\left[e^{[(k+3)\lambda+v]t}\, p_{k,k+3}(s,t)\right] = \frac{\lambda\left(k+\frac{v}{\lambda}\right)\left(k+\frac{v}{\lambda}+1\right)\left(k+\frac{v}{\lambda}+2\right)}{2}\, e^{(k\lambda+v)s}\left(e^{3\lambda t} - 2 e^{\lambda s} e^{2\lambda t} + e^{2\lambda s} e^{\lambda t}\right)$.

Integrating both sides from s to t, the antiderivative of the bracket is $\frac{1}{3\lambda}\left(e^{3\lambda y} - 3 e^{\lambda s} e^{2\lambda y} + 3 e^{2\lambda s} e^{\lambda y}\right)$, and evaluating between the limits gives $\frac{1}{3\lambda}\left(e^{\lambda t} - e^{\lambda s}\right)^3$.
Now, from (2.13), $p_{k,k+3}(s,s) = 0$. Therefore

$e^{[(k+3)\lambda+v]t}\, p_{k,k+3}(s,t) = \frac{\left(k+\frac{v}{\lambda}\right)\left(k+\frac{v}{\lambda}+1\right)\left(k+\frac{v}{\lambda}+2\right)}{3!}\, e^{(k\lambda+v)s}\left(e^{\lambda t} - e^{\lambda s}\right)^3$.

Equivalently,

$p_{k,k+3}(s,t) = \frac{\left(k+\frac{v}{\lambda}\right)\left(k+\frac{v}{\lambda}+1\right)\left(k+\frac{v}{\lambda}+2\right)}{3!}\, e^{-(k\lambda+v)(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^3 = \binom{k+\frac{v}{\lambda}+2}{3} e^{-(k\lambda+v)(t-s)}\left(1-e^{-\lambda(t-s)}\right)^3$.   (3.20)

By induction, assume

$p_{k,k+n-1}(s,t) = \binom{k+\frac{v}{\lambda}+n-2}{n-1} e^{-(k\lambda+v)(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^{n-1}$.   (3.21)

From (3.16) we have, for n > 0,

$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + [(k+n)\lambda+v]\, p_{k,k+n}(s,t) = [(k+n-1)\lambda+v] \binom{k+\frac{v}{\lambda}+n-2}{n-1} e^{-(k\lambda+v)(t-s)}\left(1-e^{-\lambda(t-s)}\right)^{n-1}$.

The integrating factor is $e^{\int [(k+n)\lambda+v]\, dt} = e^{[(k+n)\lambda+v]t}$. Multiplying through, and noting $(k+n-1)\lambda + v = \lambda\left(k+\frac{v}{\lambda}+n-1\right)$,

$\frac{\partial}{\partial t}\left[e^{[(k+n)\lambda+v]t}\, p_{k,k+n}(s,t)\right] = \lambda\left(k+\frac{v}{\lambda}+n-1\right)\binom{k+\frac{v}{\lambda}+n-2}{n-1} e^{(k\lambda+v)s}\, e^{\lambda t}\left(e^{\lambda t} - e^{\lambda s}\right)^{n-1}$.

Integrating both sides, let $u = e^{\lambda y} - e^{\lambda s}$, so $du = \lambda e^{\lambda y}\, dy$; further, when y = s, u = 0, and when y = t, $u = e^{\lambda t} - e^{\lambda s}$.
Therefore, using $p_{k,k+n}(s,s) = 0$,

$e^{[(k+n)\lambda+v]t}\, p_{k,k+n}(s,t) = \left(k+\frac{v}{\lambda}+n-1\right)\binom{k+\frac{v}{\lambda}+n-2}{n-1} e^{(k\lambda+v)s} \int_0^{e^{\lambda t}-e^{\lambda s}} u^{n-1}\, du = \binom{k+\frac{v}{\lambda}+n-1}{n} e^{(k\lambda+v)s}\left(e^{\lambda t}-e^{\lambda s}\right)^n$.

Therefore

$p_{k,k+n}(s,t) = \binom{k+\frac{v}{\lambda}+n-1}{n} e^{-(k\lambda+v)(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^n$.   (3.22)

This is a negative binomial distribution with $p = e^{-\lambda(t-s)}$ and $q = 1 - e^{-\lambda(t-s)}$. It depends on the length of the time interval t - s and on k; the increments are thus stationary but not independent.

3.3.4 Polya Process

For the Polya process, $\lambda_k(t) = \lambda\, \frac{1+ak}{1+\lambda a t}$. From equation (3.1) we have

$p_{kk}(s,t) = e^{-\int_s^t \lambda \frac{1+ak}{1+\lambda a x}\, dx}$.

Let $u = 1 + \lambda a x$, so that $du = \lambda a\, dx$. Limits: when x = s, $u = 1+\lambda a s$; when x = t, $u = 1+\lambda a t$. Therefore

$p_{kk}(s,t) = \exp\left(-\frac{1+ak}{a} \int_{1+\lambda a s}^{1+\lambda a t} \frac{du}{u}\right)$.
Evaluating the integral,

$p_{kk}(s,t) = \exp\left[-\left(k+\frac{1}{a}\right) \ln \frac{1+\lambda a t}{1+\lambda a s}\right] = \left(\frac{1+\lambda a s}{1+\lambda a t}\right)^{k+\frac{1}{a}}$.

For $n \geq 1$, equation (2.11) becomes

$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + \lambda\, \frac{1+a(k+n)}{1+\lambda a t}\, p_{k,k+n}(s,t) = \lambda\, \frac{1+a(k+n-1)}{1+\lambda a t}\, p_{k,k+n-1}(s,t)$.

For n = 1,

$\frac{\partial}{\partial t} p_{k,k+1}(s,t) + \lambda\, \frac{1+a(k+1)}{1+\lambda a t}\, p_{k,k+1}(s,t) = \lambda\, \frac{1+ak}{1+\lambda a t}\left(\frac{1+\lambda a s}{1+\lambda a t}\right)^{k+\frac{1}{a}}$.

The integrating factor is

$\exp\left(\int \lambda\, \frac{1+a(k+1)}{1+\lambda a t}\, dt\right) = (1+\lambda a t)^{k+\frac{1}{a}+1}$.

Multiplying the equation above by the integrating factor,

$\frac{\partial}{\partial t}\left[(1+\lambda a t)^{k+\frac{1}{a}+1}\, p_{k,k+1}(s,t)\right] = \lambda(1+ak)\left(1+\lambda a s\right)^{k+\frac{1}{a}}$.

Integrating from s to t, with $p_{k,k+1}(s,s) = 0$,

$(1+\lambda a t)^{k+\frac{1}{a}+1}\, p_{k,k+1}(s,t) = \lambda(1+ak)\left(1+\lambda a s\right)^{k+\frac{1}{a}}(t-s) = \lambda a\left(k+\frac{1}{a}\right)\left(1+\lambda a s\right)^{k+\frac{1}{a}}(t-s)$.   (3.23)

Therefore

$p_{k,k+1}(s,t) = \left(k+\frac{1}{a}\right)\left(\frac{1+\lambda a s}{1+\lambda a t}\right)^{k+\frac{1}{a}} \frac{\lambda a (t-s)}{1+\lambda a t}$.   (3.24)

When n = 2,

$\frac{\partial}{\partial t} p_{k,k+2}(s,t) + \lambda\, \frac{1+a(k+2)}{1+\lambda a t}\, p_{k,k+2}(s,t) = \lambda\, \frac{1+a(k+1)}{1+\lambda a t}\, p_{k,k+1}(s,t)$.

The integrating factor is $\exp\left(\int \lambda \frac{1+a(k+2)}{1+\lambda a t}\, dt\right) = (1+\lambda a t)^{k+\frac{1}{a}+2}$. Multiplying the equation above by the integrating factor,

$\frac{\partial}{\partial t}\left[(1+\lambda a t)^{k+\frac{1}{a}+2}\, p_{k,k+2}(s,t)\right] = \left(k+\frac{1}{a}\right)\left(k+\frac{1}{a}+1\right)(a\lambda)^2\left(1+\lambda a s\right)^{k+\frac{1}{a}}(t-s)$.

Integrating both sides,

$(1+\lambda a t)^{k+\frac{1}{a}+2}\, p_{k,k+2}(s,t) = \left(k+\frac{1}{a}\right)\left(k+\frac{1}{a}+1\right)(a\lambda)^2\left(1+\lambda a s\right)^{k+\frac{1}{a}} \int_s^t (y-s)\, dy$.
1 + λat k + 1a + 1 1 + λas = 2 1 + λat k+ k+ 1 a 1 a 1 + λat λa t - s 1 + λat λa t - s 1 + λat 2 2 (3.25) For n = 3 1 + a k + 3 1 + a k 2 pk,k + 3 s, t + λ pk k 2 s, t λ pk k + 3 s, t = t 1 + λat 1 + λat 1 + a k + 3 1+ a k + 2 k + 1a +1 1+ λas p k,k + 3 s, t + λ p s, t = λ kk+3 t 1 + λat 1+ λat 2 1+ λat k + 1a k + 1a +1 k + 1a + 2 1+ λas = k + +3 1+ λat 1 a 39 k+ k+ 1 a 1 a λa t -s 1+ λat λa 2 3 t -s 2 2 Integrating factor e 1 + a k + 3 λ dt 1 + λat k 1a 3 1 at Multiplying the equation above by the integrating factor, we obtain 1 1 1 k + 1a + 3 k + 1a +3 k + a k + a +1 k + a + 2 1+ λas p k,k + 3 s, t = 1+ λat 1 + λat k + 1 +3 t 1+ λat a = k + 1 a k + 1 a +1 k + + 2 1+ λas 1 a k+ 1 a λa 3 2 1 a k+ t -s λa 2 2 Integrating, we have k + 1a + 3 1 + λay t pk,k + 3 s, y = k + s 1 a k + 1 a +1 k + + 2 1+ λas 1 a k+ 1 a λa 3 t 2 y -s dy 2 s t We wish to integrate y -s dy 2 s Let u y s du 1 dy Limits When y s, u 0 and when y = t, u = t -s Therefore, t -s t - s 3 u3 s y -s dy = 0 u du = 3 = 3 - 0 0 t -s t 2 2 Thus, k + 1a + 3 1 + λay t pk,k + 3 s, y = k + s 1 a k + 1 a +1 k + + 2 1+ λas 1 a λa t - s 3 k+ 1 a 2 3 Therefore, k + 1a + 3 1 + λat pk,k + 3 s, t k + 1a k + 1a +1 k + 1a + 2 λa = 3 2 1+ λas Re arranging k+ 3 k + 1a k + 1a +1 k + 1a + 2 λa 1+ λas t - s pk,k + 3 s, t = × k + + 3 2.3 1 + λat 3 1 a 1 a 40 k+ 1 a t - s 3 3 3 3 t -s 2 p k,k + 3 s, t k + 1a k + 1a +1 k + 1a + 2 1+ λas = k+ 1 a 1 + λat 3! k + 1a + 2 ! 1+ λas = k + 1a - 1!3! 1 + λat k+ 1 a λa t - s 1 + λat λa t - s 1 + λat 3 3 Therefore, k + 1a + 2 1+ λas pk,k + 3 s, t = 3 1 + λat k+ 1 a λa t - s 1 + λat 3 (3.26) By induction we assume that when n = j – 1 k + 1a + j- 2 1+ λas pk,k + j-1 s, t = j-1 1+ λat k + 1a λa t -s 1+ λat j-1 is true. (3.27) When n = j, 1 + a k j 1 + a k j 1 pk,k + j s, t + λ pk,k + n s, t = λ pk,k + j - 1 s, t t 1 + λat 1 + λat Equivalently, 1+ a k + j 1+ a k + j-1 k + 1a + j- 2 1+ λas p k,k + j s, t + λ p s, t = k,k + n λ t 1+ λat 1+ λat j-1 1+ λat k + + j- 2 ! 
k + 1a λa t -s 1+ λat aλ 1+ λas t -s j-1 = 1+ a k + j-1 k+ + j k + 1a -1! j-1! 1+ λat k + 1a j 1 a 1 a k + + j-1 ! 1 + λas j-1 = aλ t -s k+ + j 1 k + a -1! j-1! 1+ λat 1 a k + 1a j 1 a Now we solve the equation above by use of the integrating factor Integrating factor e 1+a k + j λ dt 1+λat k + 1a + j = 1+ λat 41 j-1 Multiplying the equation above by the integrating factor, we have 1 a j j-1 k + 1a + j k + 1a + j k + a + j-1 ! 1+ λas 1+ λat p s, t = 1+ λat aλ t -s k,k + j k + 1a + j 1 t k + a -1! j-1! 1+ λat k+1 = k + + j-1 ! k + 1 a 1 a -1! j-1! 1+ λas k + 1a aλ t -s j j-1 Integrating, we have 1+ λay k + 1a + j t pk,k + j s, y = s k + + j-1 ! k + 1 a 1 a -1! j-1! 1+ λas t aλ y -s k + 1a j s t We wish to integrate y -s j -1 dy s Let u y s du 1 dy Limits When y s, u 0 and when y = t, u = t -s Therefore, t -s t y -s j -1 dy = s u j 1 0 t -s t - sj uj du = = - 0 j j 0 Thus, 1+ λat k + 1a + j pk,k + j s, t = k + + j-1 ! 1 a k + 1a -1! j-1! 1+ λas aλ k + 1a Equivalently, pk k + j k + + j-1 ! 1+ λas s, t = 1 a k + 1a k + 1a -1! j! 1+ λat k + k + 1a + j-1 1+ λas = j 1+ λat k + 1a 1+j a aλ j t - s λa t - s 1+ λat j j Therefore, k + 1a + j - 1 1 + λas p k k + j s, t = j 1 + λat j+ k + = j 1 a - 1 1 + λas 1 + λat k + 1a k + 1a λa t - s 1 + λat λa t - s 1 + λat 42 j j j j t - s j j j-1 dy Further, Let P = 1 + λas 1 + λat Then, q =1- P 1 + λas =1- 1 + λat = 1 + λat - 1 - λas 1 + λat = λat - λas 1 + λat = λa t - s 1 + λat Thus, j + k + pk,k + j s, t = j 1 a - 1 1 + λas 1 + λat is a negative binomial distribution with p = k + a1 λa t - s , j = 0, 1, 2, 3, …, 1 + λat j (3.28) λa t - s 1 + λas and q = . It depends on the 1 + λat 1 + λat length of the time interval t - s and on k and is thus stationary and not independent. 43 CHAPTER FOUR Probability Distributions for Transitional Probabilities Using Langranges Method 4.1 Introduction In this chapter, we shall solve equation (2.11) using the Lagranges method. 
We shall solve this equation for all four pure birth processes, starting with the Poisson process, then the simple birth process, the simple birth process with immigration, and finally the Polya process. The general definitions for this method are given in the first section.

4.2 General Definitions

Now, from (2.11), we know that

$\frac{\partial}{\partial t} p_{k,k+n}(s,t) + \lambda_{k+n}(t)\, p_{k,k+n}(s,t) = \lambda_{k+n-1}(t)\, p_{k,k+n-1}(s,t)$.

Let $p_{ij}(s,t) = p_{k,k+n}(s,t)$. Then equation (2.11) becomes

$\frac{\partial}{\partial t} p_{ij}(s,t) = -\lambda_j(t)\, p_{ij}(s,t) + \lambda_{j-1}(t)\, p_{i,j-1}(s,t)$.   (4.1)

Define the transition probability generating function

$G(s,t,z) = \sum_{j=0}^{\infty} p_{ij}(s,t)\, z^j = \sum_{j=i}^{\infty} p_{ij}(s,t)\, z^j$ (since $p_{ij}(s,t) = 0$ for $j < i$),

with

$\frac{\partial G}{\partial t} = \sum_{j=i}^{\infty} \frac{\partial}{\partial t} p_{ij}(s,t)\, z^j$, $\frac{\partial G}{\partial z} = \sum_{j=i}^{\infty} j\, p_{ij}(s,t)\, z^{j-1}$, so that $z\,\frac{\partial G}{\partial z} = \sum_{j=i}^{\infty} j\, p_{ij}(s,t)\, z^{j}$.

Boundary condition: $p_{ii}(s,s) = 1$ and $p_{ij}(s,s) = 0$ for $j \neq i$, so

$G(s,s,z) = \sum_{j=i}^{\infty} p_{ij}(s,s)\, z^j = z^i$.

Multiplying equation (4.1) by $z^j$ and summing over j, we obtain

$\sum_{j=i}^{\infty} \frac{\partial}{\partial t} p_{ij}(s,t)\, z^j = -\sum_{j=i}^{\infty} \lambda_j(t)\, p_{ij}(s,t)\, z^j + z \sum_{j=i+1}^{\infty} \lambda_{j-1}(t)\, p_{i,j-1}(s,t)\, z^{j-1}$.   (4.2)

4.3 Poisson Process

When $\lambda_j(t) = \lambda$, equation (4.2) becomes

$\sum_{j=i}^{\infty} \frac{\partial}{\partial t} p_{ij}(s,t)\, z^j = -\lambda \sum_{j=i}^{\infty} p_{ij}(s,t)\, z^j + z\lambda \sum_{j=i+1}^{\infty} p_{i,j-1}(s,t)\, z^{j-1} = (z-1)\lambda \sum_{j=i}^{\infty} p_{ij}(s,t)\, z^j$.

Therefore

$\frac{\partial G(s,t,z)}{\partial t} = (z-1)\lambda\, G(s,t,z)$,

i.e.

$\frac{\partial}{\partial t} \log G(s,t,z) = (z-1)\lambda$.

Integrating with respect to t,

$\log G(s,t,z) = (z-1)\lambda t + c$.   (4.3)
Taking exponentials on both sides,

$G(s,t,z) = e^{(z-1)\lambda t + c} = K\, e^{(z-1)\lambda t}$.

When t = s, $G(s,s,z) = K\, e^{(z-1)\lambda s}$. But $G(s,s,z) = \sum_{j=i}^{\infty} p_{ij}(s,s)\, z^j = z^i$. Thus $K = z^i e^{-(z-1)\lambda s}$, and

$G(s,t,z) = z^i\, e^{(z-1)\lambda(t-s)}$.   (4.4)

$p_{ij}(s,t)$ is the coefficient of $z^j$ in $G(s,t,z)$. Now

$G(s,t,z) = z^i\, e^{-\lambda(t-s)}\, e^{\lambda(t-s)z} = z^i\, e^{-\lambda(t-s)} \sum_{n=0}^{\infty} \frac{[\lambda(t-s)]^n z^n}{n!} = \sum_{n=0}^{\infty} e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^n}{n!}\, z^{n+i}$.

Let j = n + i. Then

$p_{ij}(s,t) = p_{i,i+n}(s,t) = e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^n}{n!}$, n = 0, 1, 2, …,   (4.5)

which is a Poisson distribution with parameter $\lambda(t-s)$.

4.4 Simple Birth Process

When $\lambda_j(t) = j\lambda$, equation (4.2) becomes

$\sum_{j=i}^{\infty} \frac{\partial}{\partial t} p_{ij}(s,t)\, z^j = -\sum_{j=i}^{\infty} j\lambda\, p_{ij}(s,t)\, z^j + z \sum_{j=i+1}^{\infty} (j-1)\lambda\, p_{i,j-1}(s,t)\, z^{j-1} = -z\lambda \sum_{j=i}^{\infty} j\, p_{ij}(s,t)\, z^{j-1} + z^2 \lambda \sum_{j=i+1}^{\infty} (j-1)\, p_{i,j-1}(s,t)\, z^{j-2}$.   (4.6)

Therefore

$\frac{\partial G(s,t,z)}{\partial t} = z(z-1)\lambda\, \frac{\partial G(s,t,z)}{\partial z}$,

i.e.

$\frac{\partial G(s,t,z)}{\partial t} - z(z-1)\lambda\, \frac{\partial G(s,t,z)}{\partial z} = 0$.   (4.7)

The auxiliary equations are

$\frac{dt}{1} = \frac{dz}{-z(z-1)\lambda} = \frac{dG(s,t,z)}{0}$.

Taking the first two,

$-\lambda\, dt = \frac{dz}{z(z-1)}$.

Integrating, using $\frac{1}{z(z-1)} = -\frac{1}{z} + \frac{1}{z-1}$,

$-\lambda t = -\ln z + \ln(z-1) + c$, i.e. $c = -\lambda t + \ln \frac{z}{z-1}$.

Taking another pair, $\frac{dt}{1} = \frac{dG(s,t,z)}{0}$ gives $dG = 0$, so $G(s,t,z)$ is constant along the characteristics. Therefore $G(s,t,z) = \psi(u)$, where

$u = e^{-\lambda t}\, \frac{z}{z-1}$.

Using the boundary condition at t = s, $G(s,s,z) = z^i$ with $u = e^{-\lambda s}\frac{z}{z-1}$. Solving for z: $u(z-1) = z\, e^{-\lambda s}$, so $z = \frac{u}{u - e^{-\lambda s}}$, and hence

$\psi(u) = \left(\frac{u}{u - e^{-\lambda s}}\right)^i$.

Thus

$G(s,t,z) = \left(\frac{e^{-\lambda t}\frac{z}{z-1}}{e^{-\lambda t}\frac{z}{z-1} - e^{-\lambda s}}\right)^i = \left(\frac{z\, e^{-\lambda t}}{z\, e^{-\lambda t} - (z-1)\, e^{-\lambda s}}\right)^i = \left(\frac{z\, e^{-\lambda(t-s)}}{1 - z\left(1 - e^{-\lambda(t-s)}\right)}\right)^i$.   (4.8)

Let $p = e^{-\lambda(t-s)}$ and $q = 1 - e^{-\lambda(t-s)}$. $p_{ij}(s,t)$ is the coefficient of $z^j$ in $G(s,t,z)$.
With this notation,

$G(s,t,z) = \left(\frac{zp}{1-zq}\right)^i = z^i\, p^i\, (1-zq)^{-i}$.

Expanding by the negative binomial series,

$(1-zq)^{-i} = \sum_{n=0}^{\infty} \binom{-i}{n}(-zq)^n = \sum_{n=0}^{\infty} \binom{i+n-1}{n} q^n z^n$,

so that

$G(s,t,z) = \sum_{n=0}^{\infty} \binom{i+n-1}{n} p^i\, q^n\, z^{i+n}$.

Let j = i + n, n = 0, 1, 2, …. Then

$p_{ij}(s,t) = p_{i,i+n}(s,t) = \binom{i+n-1}{n} p^i q^n = \binom{i+n-1}{n} e^{-i\lambda(t-s)}\left(1 - e^{-\lambda(t-s)}\right)^n$,   (4.9)

which is a negative binomial distribution.

4.5 Simple Birth Process with Immigration

In this case $\lambda_j(t) = j\lambda + v$. Equation (4.2) now becomes

$\sum_{j=i}^{\infty} \frac{\partial}{\partial t} p_{ij}(s,t)\, z^j = -\sum_{j=i}^{\infty} (j\lambda + v)\, p_{ij}(s,t)\, z^j + z \sum_{j=i+1}^{\infty} [(j-1)\lambda + v]\, p_{i,j-1}(s,t)\, z^{j-1} = -\lambda z \sum_{j=i}^{\infty} j\, p_{ij}\, z^{j-1} - v\, G + \lambda z^2 \sum_{j=i+1}^{\infty} (j-1)\, p_{i,j-1}\, z^{j-2} + v z\, G$.

Therefore

$\frac{\partial G(s,t,z)}{\partial t} = \lambda z(z-1)\, \frac{\partial G(s,t,z)}{\partial z} + v(z-1)\, G(s,t,z)$,

i.e.

$\frac{\partial G(s,t,z)}{\partial t} - \lambda z(z-1)\, \frac{\partial G(s,t,z)}{\partial z} = v(z-1)\, G(s,t,z)$.   (4.10)

The auxiliary equations are

$\frac{dt}{1} = \frac{dz}{-\lambda z(z-1)} = \frac{dG(s,t,z)}{v(z-1)\, G(s,t,z)}$.   (4.11)

Selecting the first two,

$-\lambda\, dt = \frac{dz}{z(z-1)} = \left(-\frac{1}{z} + \frac{1}{z-1}\right) dz$.

Integrating both sides,

$-\lambda t = -\ln z + \ln(z-1) + c$, i.e. $c = -\lambda t + \ln \frac{z}{z-1}$.   (i)

Taking the last two,

$\frac{dz}{-\lambda z(z-1)} = \frac{dG(s,t,z)}{v(z-1)\, G(s,t,z)}$,

i.e.

$-\frac{v}{\lambda}\, \frac{dz}{z} = \frac{dG(s,t,z)}{G(s,t,z)} = d\ln G(s,t,z)$.

Integrating both sides,

$\ln G(s,t,z) = -\frac{v}{\lambda} \ln z + c$, so $G(s,t,z) = K\, z^{-\frac{v}{\lambda}}$, i.e. $z^{\frac{v}{\lambda}}\, G(s,t,z) = K$.   (ii)

From (i), let
v G s,s,z z i z λ ψ u - u z u e-λs i v z z -1 u z - 1 = e -λs z uz - u= e-λs z z u e-λs u z u u e-λs Therefore, 53 u u -λs u e i v z e-λt v z -1 G s,t,z = z λ e-λt z - e-λs z -1 i+ ze-λt = z -λt -λs ze - z - 1 e - v λ - v λ = z .z i+ v λ v λ i+ v λ e-λs -λt -λs -λs z e - e + e e-λs = zi -λs -λt -λs e + z e - e i+ v λ i+ v λ e-λ t - s = zi 1 + z e-λ t - s - 1 -λ t - s e i =z 1 - z 1 - e-λ t - s i+ i+ v λ (4.12) Let p e t s and q 1 p 1 e t s . Then p G s,t,z = zi 1 - qz i+ v λ p j k s, t is the coefficient of z n of G s,t,z . G s,t,z = zi p = zi p i+ i+ v λ v λ 1 - qz - j+ - i + n n 0 v λ v λ v λ - qz n 54 G s,t,z = z i p i+ v λ -1 n i + v λ n 0 = -1 2n n 0 i + n 0 = v λ i + v λ n 1 n n -1 qz n n 1 i + λv n i n p q z n n 1 i + λv n i n p q z n k = i + n : n = 0,1,2,… i + pi j s, t = pi,i n s, t = v λ + n - 1 i + λv n p q , n = 0,1,2,… n Thus i + pi j s, t = v λ + n - 1 -λ t - s e n v λ + n - 1 - t - s λ i + λv -λ t - s 1-e e n 1 - e i+ v λ -λ t - s n Equivalently, i + pi j s, t = This is a negative binomial distribution function. 
4.6 Polya Process

In this case

\[
\lambda_{j}(t)=\lambda\,\frac{1+aj}{1+\lambda at}.
\]

Equation (4.2) now becomes

\[
\frac{\partial}{\partial t}\sum_{j\ge i}p_{ij}(s,t)z^{j}
=-\sum_{j\ge i}\lambda\,\frac{1+aj}{1+\lambda at}\,p_{ij}(s,t)z^{j}
+\sum_{j\ge i+1}\lambda\,\frac{1+a(j-1)}{1+\lambda at}\,p_{i,j-1}(s,t)z^{j}.
\]

Expanding and collecting terms in \(G(s,t,z)\) and \(\frac{\partial G}{\partial z}\),

\[
\frac{1+\lambda at}{\lambda}\,\frac{\partial G}{\partial t}
=-G-az\frac{\partial G}{\partial z}+zG+az^{2}\frac{\partial G}{\partial z},
\]

so that, rewriting,

\[
\frac{1+\lambda at}{\lambda}\,\frac{\partial G}{\partial t}-az(z-1)\frac{\partial G}{\partial z}=(z-1)G. \tag{4.14}
\]

Solving using Lagrange's method, the auxiliary equations are

\[
\frac{\lambda\,dt}{1+\lambda at}=\frac{dz}{-az(z-1)}=\frac{dG}{(z-1)G}. \tag{4.15}
\]

Taking the first two equations, we have

\[
-\frac{\lambda a\,dt}{1+\lambda at}=\frac{dz}{z(z-1)}.
\]

Integrating both sides,

\[
-\ln(1+\lambda at)=\ln(z-1)-\ln z+c
\quad\Longrightarrow\quad
\frac{z}{(1+\lambda at)(z-1)}=\text{constant},
\]

so we set

\[
u(t)=\frac{z}{(1+\lambda at)(z-1)}. \tag{i}
\]

Taking the last two,

\[
\frac{dz}{-az(z-1)}=\frac{dG}{(z-1)G}
\quad\Longrightarrow\quad
\frac{dG}{G}=-\frac{1}{a}\,\frac{dz}{z}.
\]

Integrating, \(z^{\frac1a}G(s,t,z)=\text{constant}\), so that

\[
G(s,t,z)=z^{-\frac1a}\,\psi\bigl(u(t)\bigr). \tag{ii}
\]

Using the boundary condition \(t=s\): \(G(s,s,z)=z^{i}\), hence \(\psi(u(s))=z^{\,i+\frac1a}\). From (i),

\[
u(s)(1+\lambda as)(z-1)=z
\quad\Longrightarrow\quad
z=\frac{u(s)(1+\lambda as)}{u(s)(1+\lambda as)-1},
\]

and therefore

\[
\psi(u)=\left(\frac{u(1+\lambda as)}{u(1+\lambda as)-1}\right)^{i+\frac1a}.
\]

Substituting \(u(t)=\dfrac{z}{(1+\lambda at)(z-1)}\),

\[
G(s,t,z)=z^{-\frac1a}\left(\frac{z(1+\lambda as)}{z(1+\lambda as)-(1+\lambda at)(z-1)}\right)^{i+\frac1a}.
\]
The denominator simplifies:

\[
z(1+\lambda as)-(1+\lambda at)(z-1)=(1+\lambda at)-z\,\lambda a(t-s)
=(1+\lambda at)\left[1-z\,\frac{\lambda a(t-s)}{1+\lambda at}\right].
\]

Therefore

\[
G(s,t,z)=z^{i}\left(\frac{\dfrac{1+\lambda as}{1+\lambda at}}{1-z\,\dfrac{\lambda a(t-s)}{1+\lambda at}}\right)^{i+\frac1a}. \tag{4.16}
\]

Let

\[
p=\frac{1+\lambda as}{1+\lambda at},\qquad
q=1-p=\frac{\lambda a(t-s)}{1+\lambda at}.
\]

Then \(G(s,t,z)=z^{i}p^{\,i+\frac1a}(1-qz)^{-\left(i+\frac1a\right)}\), and \(p_{ij}(s,t)\) is the coefficient of \(z^{j}\) in

\[
G(s,t,z)=z^{i}p^{\,i+\frac1a}\sum_{n=0}^{\infty}\binom{i+\frac1a+n-1}{n}q^{n}z^{n}.
\]

Putting \(j=i+n\), \(n=0,1,2,\ldots\),

\[
p_{i,i+n}(s,t)=\binom{i+\frac1a+n-1}{n}
\left(\frac{1+\lambda as}{1+\lambda at}\right)^{i+\frac1a}
\left(\frac{\lambda a(t-s)}{1+\lambda at}\right)^{n},
\qquad n=0,1,2,\ldots \tag{4.17}
\]

which is a negative binomial distribution.

CHAPTER FIVE

INFINITESIMAL GENERATOR

5.1 Introduction

We now wish to solve equation (2.11) below, which was derived in Chapter Two; this time we use a matrix method:

\[
\frac{\partial}{\partial t}p_{k,k+n}(s,t)+\lambda_{k+n}(t)\,p_{k,k+n}(s,t)=\lambda_{k+n-1}(t)\,p_{k,k+n-1}(s,t).
\]

For n = 0, 1, 2, 3, … equation (2.11) becomes:

n = 0: \(\dfrac{\partial}{\partial t}p_{jj}(s,t)=-\lambda_{j}\,p_{jj}(s,t)\) (the term \(\lambda_{j-1}p_{j,j-1}\) vanishes);

n = 1: \(\dfrac{\partial}{\partial t}p_{j,j+1}(s,t)=\lambda_{j}\,p_{jj}(s,t)-\lambda_{j+1}\,p_{j,j+1}(s,t)\);

n = 2: \(\dfrac{\partial}{\partial t}p_{j,j+2}(s,t)=\lambda_{j+1}\,p_{j,j+1}(s,t)-\lambda_{j+2}\,p_{j,j+2}(s,t)\);

and in general, for any n,

\[
\frac{\partial}{\partial t}p_{j,j+n}(s,t)=\lambda_{j+n-1}\,p_{j,j+n-1}(s,t)-\lambda_{j+n}\,p_{j,j+n}(s,t).
\]

These equations form the matrix system
\[
\frac{\partial}{\partial t}
\begin{pmatrix}
p_{j,j}(s,t)\\ p_{j,j+1}(s,t)\\ p_{j,j+2}(s,t)\\ \vdots\\ p_{j,j+n}(s,t)\\ \vdots
\end{pmatrix}
=
\begin{pmatrix}
-\lambda_{j} & 0 & 0 & 0 & \cdots\\
\lambda_{j} & -\lambda_{j+1} & 0 & 0 & \cdots\\
0 & \lambda_{j+1} & -\lambda_{j+2} & 0 & \cdots\\
0 & 0 & \lambda_{j+2} & -\lambda_{j+3} & \cdots\\
\vdots & & & \ddots & \ddots
\end{pmatrix}
\begin{pmatrix}
p_{j,j}(s,t)\\ p_{j,j+1}(s,t)\\ p_{j,j+2}(s,t)\\ \vdots\\ p_{j,j+n}(s,t)\\ \vdots
\end{pmatrix}. \tag{5.1}
\]

5.2 Solving the Matrix Equation

Writing (5.1) as \(\frac{\partial}{\partial t}P(s,t)=\Lambda\,P(s,t)\), we have (formally) \(\frac{\partial}{\partial t}\log P(s,t)=\Lambda\). Integrating both sides from \(s\) to \(t\),

\[
\log P(s,t)-\log P(s,s)=\Lambda\,(t-s),
\]

and since \(P(s,s)=I\), taking the exponential of both sides gives

\[
P(s,t)=e^{\Lambda(t-s)}.
\]

Therefore, from the Taylor expansion,

\[
P(s,t)=\sum_{k=0}^{\infty}\frac{\Lambda^{k}(t-s)^{k}}{k!}=I+\sum_{n=1}^{\infty}\frac{\Lambda^{n}(t-s)^{n}}{n!}.
\]

If all the eigenvalues \(\mu_{i}\) of \(\Lambda\) are distinct, then

\[
P(s,t)=e^{\Lambda(t-s)}=U\,\mathrm{diag}\!\left(e^{\mu_{i}(t-s)}\right)U^{-1}. \tag{5.2}
\]

Obtaining the eigenvalues of \(\Lambda\) from \(|\Lambda-\mu I|=0\): since the leading \((n+1)\times(n+1)\) block of \(\Lambda\) is lower triangular,

\[
(-\lambda_{j}-\mu)(-\lambda_{j+1}-\mu)(-\lambda_{j+2}-\mu)\cdots(-\lambda_{j+n}-\mu)=0.
\]

Solving, we obtain \(\mu=-\lambda_{j}\) or \(\mu=-\lambda_{j+1}\) or … or \(\mu=-\lambda_{j+n}\). Assume that the \(\lambda_{j+i}\), \(i=0,1,\ldots,n\), are distinct; we then obtain \(n+1\) distinct eigenvalues. Next we obtain the eigenvectors corresponding to these eigenvalues. Let \(z=(z_{j},z_{j+1},\ldots,z_{j+n})^{\top}\). For \(\mu=-\lambda_{j}\), the equation \(\Lambda z=-\lambda_{j}z\) gives, row by row,

\[
\lambda_{j}z_{j}-\lambda_{j+1}z_{j+1}=-\lambda_{j}z_{j+1}
\quad\Longrightarrow\quad
z_{j+1}=\frac{\lambda_{j}}{\lambda_{j+1}-\lambda_{j}}\,z_{j}.
\]

Letting \(z_{j}=1\),

\[
z_{j+1}=\frac{\lambda_{j}}{\lambda_{j+1}-\lambda_{j}},\qquad
z_{j+2}=\frac{\lambda_{j+1}z_{j+1}}{\lambda_{j+2}-\lambda_{j}}=\frac{\lambda_{j}\lambda_{j+1}}{(\lambda_{j+1}-\lambda_{j})(\lambda_{j+2}-\lambda_{j})},
\]

\[
z_{j+3}=\frac{\lambda_{j}\lambda_{j+1}\lambda_{j+2}}{(\lambda_{j+1}-\lambda_{j})(\lambda_{j+2}-\lambda_{j})(\lambda_{j+3}-\lambda_{j})}.
\]

Similarly,

\[
z_{j+n}=\frac{\lambda_{j}\lambda_{j+1}\cdots\lambda_{j+n-1}}{(\lambda_{j+1}-\lambda_{j})(\lambda_{j+2}-\lambda_{j})\cdots(\lambda_{j+n}-\lambda_{j})}
\]
\[
=\prod_{k=1}^{n}\frac{\lambda_{j+k-1}}{\lambda_{j+k}-\lambda_{j}}.
\]

The eigenvector corresponding to \(\mu=-\lambda_{j}\) is therefore

\[
e_{j}=\left(1,\ \frac{\lambda_{j}}{\lambda_{j+1}-\lambda_{j}},\ \frac{\lambda_{j}\lambda_{j+1}}{(\lambda_{j+1}-\lambda_{j})(\lambda_{j+2}-\lambda_{j})},\ \ldots,\ \prod_{k=1}^{n}\frac{\lambda_{j+k-1}}{\lambda_{j+k}-\lambda_{j}}\right)^{\!\top}.
\]

In the same way, the eigenvector for \(\mu=-\lambda_{j+i}\) has its first \(i\) entries equal to zero:

\[
e_{j+i}=\left(0,\ldots,0,\ 1,\ \frac{\lambda_{j+i}}{\lambda_{j+i+1}-\lambda_{j+i}},\ \frac{\lambda_{j+i}\lambda_{j+i+1}}{(\lambda_{j+i+1}-\lambda_{j+i})(\lambda_{j+i+2}-\lambda_{j+i})},\ \ldots,\ \prod_{k=i+1}^{n}\frac{\lambda_{j+k-1}}{\lambda_{j+k}-\lambda_{j+i}}\right)^{\!\top}.
\]

Let \(U=(e_{j}\ e_{j+1}\ e_{j+2}\ \cdots\ e_{j+n})\), a lower triangular matrix with unit diagonal whose \((r,i)\) entry, for \(r\ge i\), is \(\prod_{k=i+1}^{r}\frac{\lambda_{j+k-1}}{\lambda_{j+k}-\lambda_{j+i}}\). \hfill(5.3)

Then

\[
P(s,t)=U\,\mathrm{diag}\!\left(e^{-\lambda_{j+i}(t-s)}\right)U^{-1}\,p(s,s),
\qquad p(s,s)=(1,0,0,\ldots,0)^{\top}.
\]

The inverse of \(U\) is likewise lower triangular with unit diagonal; its \((r,i)\) entry, for \(r\ge i\), is

\[
\left(U^{-1}\right)_{r,i}=(-1)^{\,r-i}\,\frac{\prod_{k=i}^{r-1}\lambda_{j+k}}{\prod_{k=i}^{r-1}\left(\lambda_{j+r}-\lambda_{j+k}\right)},
\]

so that its first column is

\[
\left(1,\ \frac{-\lambda_{j}}{\lambda_{j+1}-\lambda_{j}},\ \frac{\lambda_{j}\lambda_{j+1}}{(\lambda_{j+2}-\lambda_{j})(\lambda_{j+2}-\lambda_{j+1})},\ \frac{-\lambda_{j}\lambda_{j+1}\lambda_{j+2}}{(\lambda_{j+3}-\lambda_{j})(\lambda_{j+3}-\lambda_{j+1})(\lambda_{j+3}-\lambda_{j+2})},\ \ldots\right)^{\!\top}. \tag{5.4}
\]

Also,

\[
e^{\,\mathrm{diag}(\Lambda)(t-s)}=\mathrm{diag}\!\left(e^{-\lambda_{j}(t-s)},\ e^{-\lambda_{j+1}(t-s)},\ \ldots,\ e^{-\lambda_{j+n}(t-s)}\right). \tag{5.5}
\]

Since \(p(s,s)=(1,0,\ldots,0)^{\top}\), \(U^{-1}p(s,s)\) is the first column of \(U^{-1}\), and

\[
e^{\,\mathrm{diag}(\Lambda)(t-s)}\,U^{-1}p(s,s)
=\left((-1)^{r}\,\frac{\prod_{k=0}^{r-1}\lambda_{j+k}}{\prod_{k=0}^{r-1}\left(\lambda_{j+r}-\lambda_{j+k}\right)}\;e^{-\lambda_{j+r}(t-s)}\right)_{r=0,1,\ldots,n}.
\]
Premultiplying by \(U\) and reading off row \(n\) of \(P(s,t)\) (that is, \(p_{j,j+n}(s,t)\)), the numerator products of \(\lambda\)'s combine and the denominator products combine into \(\prod_{k\neq v}(\lambda_{j+k}-\lambda_{j+v})\). Therefore

\[
p_{j,j+n}(s,t)=\left(\prod_{k=0}^{n-1}\lambda_{j+k}\right)\sum_{v=0}^{n}\frac{e^{-\lambda_{j+v}(t-s)}}{\displaystyle\prod_{\substack{k=0\\k\neq v}}^{n}\left(\lambda_{j+k}-\lambda_{j+v}\right)}.
\]

5.3 Simple Birth Process

In this case \(\lambda_{j}=j\lambda\), so

\[
p_{j,j+n}(s,t)=\left(\prod_{k=0}^{n-1}(j+k)\lambda\right)\sum_{v=0}^{n}\frac{e^{-(j+v)\lambda(t-s)}}{\prod_{k\neq v}(k-v)\lambda}
=\frac{(j+n-1)!}{(j-1)!}\,e^{-j\lambda(t-s)}\sum_{v=0}^{n}\frac{e^{-v\lambda(t-s)}}{\prod_{k\neq v}(k-v)},
\]

since \(\prod_{k=0}^{n-1}(j+k)\lambda=\lambda^{n}\,\frac{(j+n-1)!}{(j-1)!}\) and the factors of \(\lambda^{n}\) cancel. Moreover,

\[
\prod_{\substack{k=0\\k\neq v}}^{n}(k-v)=(-1)^{v}\,v!\,(n-v)!.
\]
Therefore

\[
p_{j,j+n}(s,t)=e^{-j\lambda(t-s)}\,\frac{(j+n-1)!}{(j-1)!}\sum_{k=0}^{n}\frac{(-1)^{k}e^{-k\lambda(t-s)}}{k!\,(n-k)!}.
\]

Simplifying,

\[
p_{j,j+n}(s,t)
=e^{-j\lambda(t-s)}\,\frac{(j+n-1)!}{(j-1)!\,n!}\sum_{k=0}^{n}\binom{n}{k}\left(-e^{-\lambda(t-s)}\right)^{k}
=\binom{j+n-1}{n}e^{-j\lambda(t-s)}\left(1-e^{-\lambda(t-s)}\right)^{n}.
\]

Therefore

\[
p_{j,j+n}(s,t)=\binom{j+n-1}{n}\left(e^{-\lambda(t-s)}\right)^{j}\left(1-e^{-\lambda(t-s)}\right)^{n},\qquad n=0,1,2,3,\ldots \tag{5.8}
\]

5.4 Simple Birth with Immigration

Now, recall that

\[
p_{j,j+n}(s,t)=\left(\prod_{k=0}^{n-1}\lambda_{j+k}\right)\sum_{i=0}^{n}\frac{e^{-\lambda_{j+i}(t-s)}}{\displaystyle\prod_{\substack{k=0\\k\neq i}}^{n}\left(\lambda_{j+k}-\lambda_{j+i}\right)}.
\]

In this case \(\lambda_{j}=j\lambda+v\), so \(\lambda_{j+k}-\lambda_{j+i}=(k-i)\lambda\) and

\[
p_{j,j+n}(s,t)=\prod_{k=0}^{n-1}\bigl[(j+k)\lambda+v\bigr]\sum_{i=0}^{n}\frac{e^{-[(j+i)\lambda+v](t-s)}}{\prod_{k\neq i}(k-i)\lambda}
=e^{-(j\lambda+v)(t-s)}\prod_{k=0}^{n-1}\left(j+\frac{v}{\lambda}+k\right)\sum_{i=0}^{n}\frac{(-1)^{i}e^{-i\lambda(t-s)}}{i!\,(n-i)!}.
\]
Simplifying,

\[
p_{j,j+n}(s,t)=\frac{\left(j+\frac{v}{\lambda}+n-1\right)!}{\left(j+\frac{v}{\lambda}-1\right)!\,n!}\,e^{-(j\lambda+v)(t-s)}\sum_{i=0}^{n}\binom{n}{i}\left(-e^{-\lambda(t-s)}\right)^{i}
=\binom{j+\frac{v}{\lambda}+n-1}{n}e^{-(j\lambda+v)(t-s)}\left(1-e^{-\lambda(t-s)}\right)^{n}.
\]

Equivalently,

\[
p_{j,j+n}(s,t)=\binom{j+\frac{v}{\lambda}+n-1}{n}\left(e^{-\lambda(t-s)}\right)^{j+\frac{v}{\lambda}}\left(1-e^{-\lambda(t-s)}\right)^{n},\qquad n=0,1,2,\ldots \tag{5.9}
\]

5.5 Polya Process

In this case

\[
\lambda_{j}=\lambda\,\frac{1+aj}{1+\lambda at}.
\]

Substituting into

\[
p_{j,j+n}(s,t)=\left(\prod_{k=0}^{n-1}\lambda_{j+k}\right)\sum_{v=0}^{n}\frac{e^{-\lambda_{j+v}(t-s)}}{\prod_{k\neq v}\left(\lambda_{j+k}-\lambda_{j+v}\right)},
\]

we have

\[
\lambda_{j+k}-\lambda_{j+v}=\frac{\lambda a(k-v)}{1+\lambda at},
\qquad
\prod_{k=0}^{n-1}\lambda_{j+k}=\left(\frac{\lambda a}{1+\lambda at}\right)^{n}\prod_{k=0}^{n-1}\left(j+\frac1a+k\right),
\]

so that the powers of \(\frac{\lambda a}{1+\lambda at}\) cancel and

\[
p_{j,j+n}(s,t)=\prod_{k=0}^{n-1}\left(j+\frac1a+k\right)\sum_{v=0}^{n}\frac{(-1)^{v}}{v!\,(n-v)!}\,\exp\!\left(-\frac{1+a(j+v)}{1+\lambda at}\,\lambda(t-s)\right).
\]
Simplifying,

\[
p_{j,j+n}(s,t)=\binom{j+\frac1a+n-1}{n}\exp\!\left(-\frac{1+aj}{1+\lambda at}\,\lambda(t-s)\right)\sum_{v=0}^{n}\binom{n}{v}\left(-e^{-\frac{a\lambda(t-s)}{1+\lambda at}}\right)^{v}.
\]

Therefore

\[
p_{j,j+n}(s,t)=\binom{j+\frac1a+n-1}{n}\,e^{-\frac{1+aj}{1+\lambda at}\lambda(t-s)}\left(1-e^{-\frac{a\lambda(t-s)}{1+\lambda at}}\right)^{n}. \tag{5.10}
\]

This is a negative binomial distribution.

CHAPTER SIX

FIRST PASSAGE DISTRIBUTIONS FOR PURE BIRTH PROCESSES

6.1 Derivation of \(F_{n}(t)\) from \(p_{n}(t)\)

Let \(p_{n}(t)=\operatorname{Prob}\{X(t)=n\}\) and \(F_{n}(t)=\operatorname{Prob}\{T_{n}\le t\}\), where \(X(t)\) is the population size at time \(t\) and \(T_{n}\) is the first time the population size is \(n\). For a birth process the sample paths are non-decreasing, so \(T_{n}<t\) implies \(X(t)\ge n\); \(T_{n}=t\) implies \(X(t)=n\); and \(T_{n}>t\) implies \(X(t)<n\). Hence \(T_{n}\le t \iff X(t)\ge n\), so that \(\operatorname{Prob}\{T_{n}\le t\}=\operatorname{Prob}\{X(t)\ge n\}\), i.e.

\[
F_{n}(t)=1-\operatorname{Prob}\{X(t)<n\}=1-\operatorname{Prob}\{X(t)\le n-1\}=1-\sum_{j=0}^{n-1}p_{j}(t). \tag{6.1}
\]

Next, \(f_{n}(t)=\dfrac{d}{dt}F_{n}(t)\).

6.2 Poisson Process

6.2.1 Determining \(F_{n}(t)\) for the Poisson Process

\[
p_{j}(t)=\frac{e^{-\lambda t}(\lambda t)^{j}}{j!},\qquad j=0,1,2,3,\ldots
\]

Therefore

\[
F_{n}(t)=1-\sum_{j=0}^{n-1}\frac{e^{-\lambda t}(\lambda t)^{j}}{j!}.
\]

But \(\frac{d}{dt}F_{n}(t)=f_{n}(t)\). Therefore

\[
f_{n}(t)=-\frac{d}{dt}\sum_{j=0}^{n-1}\frac{e^{-\lambda t}(\lambda t)^{j}}{j!}
=\lambda e^{-\lambda t}\left[\sum_{j=0}^{n-1}\frac{(\lambda t)^{j}}{j!}-\sum_{j=1}^{n-1}\frac{(\lambda t)^{j-1}}{(j-1)!}\right].
\]

Expanding the expressions inside the two sums, every term cancels except the last term of the first sum. Therefore

\[
f_{n}(t)=\frac{e^{-\lambda t}(\lambda t)^{n-1}}{(n-1)!}\,\lambda=\frac{\lambda^{n}}{(n-1)!}\,e^{-\lambda t}\,t^{\,n-1},\qquad t>0,\ n=1,2,3,\ldots \tag{6.2}
\]

which is gamma\((n,\lambda)\).

6.2.2 Mean and Variance of \(f_{n}(t)\) for the Poisson Process

Mean:

\[
E[T_{n}]=\int_{0}^{\infty}t\,\frac{\lambda^{n}}{(n-1)!}\,e^{-\lambda t}\,t^{\,n-1}\,dt
=\frac{\lambda^{n}}{(n-1)!}\int_{0}^{\infty}e^{-\lambda t}\,t^{\,n}\,dt.
\]

Let \(y=\lambda t\), so that \(t=\frac{y}{\lambda}\) and \(dt=\frac{dy}{\lambda}\).
Therefore

\[
E[T_{n}]=\frac{\lambda^{n}}{(n-1)!}\int_{0}^{\infty}e^{-y}\left(\frac{y}{\lambda}\right)^{n}\frac{dy}{\lambda}
=\frac{1}{\lambda\,(n-1)!}\int_{0}^{\infty}e^{-y}y^{n}\,dy
=\frac{\Gamma(n+1)}{\lambda\,(n-1)!}=\frac{n}{\lambda}. \tag{6.3}
\]

Variance:

\[
E[T_{n}^{2}]=\int_{0}^{\infty}t^{2}\,\frac{\lambda^{n}}{(n-1)!}\,e^{-\lambda t}\,t^{\,n-1}\,dt
=\frac{\lambda^{n}}{(n-1)!}\int_{0}^{\infty}e^{-\lambda t}\,t^{\,n+1}\,dt
=\frac{1}{\lambda^{2}(n-1)!}\int_{0}^{\infty}e^{-y}y^{\,n+1}\,dy
=\frac{n(n+1)}{\lambda^{2}}.
\]

But \(\operatorname{Var}(T_{n})=E[T_{n}^{2}]-\bigl(E[T_{n}]\bigr)^{2}\). Thus, simplifying,

\[
\operatorname{Var}(T_{n})=\frac{n(n+1)}{\lambda^{2}}-\frac{n^{2}}{\lambda^{2}}=\frac{n}{\lambda^{2}}. \tag{6.4}
\]

6.3 Simple Birth Process

6.3.1 Determining \(f_{n}(t)\) for the Simple Birth Process

Initial condition: at \(t=0\), \(X(0)=1\). Then

\[
p_{j}(t)=e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{j-1},\qquad j=1,2,3,\ldots
\]

Therefore

\[
F_{n}(t)=1-\sum_{j=1}^{n-1}e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{j-1}
=1-e^{-\lambda t}\left[1+\left(1-e^{-\lambda t}\right)+\cdots+\left(1-e^{-\lambda t}\right)^{n-2}\right]
=1-e^{-\lambda t}\,\frac{1-\left(1-e^{-\lambda t}\right)^{n-1}}{1-\left(1-e^{-\lambda t}\right)}
=\left(1-e^{-\lambda t}\right)^{n-1}.
\]

Differentiating both sides with respect to \(t\), we get

\[
f_{n}(t)=(n-1)\left(1-e^{-\lambda t}\right)^{n-2}\lambda e^{-\lambda t}
=\lambda(n-1)\,e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{n-2},\qquad t>0,\ n=2,3,4,\ldots
\]

Definition of the exponentiated exponential distribution. Let \(F(x)=[G(x)]^{\alpha}\), \(\alpha>0\), where \(G(x)\) is the old or parent cdf and \(F(x)\) is the new cdf. Then

\[
f(x)=\frac{dF}{dx}=\alpha\,[G(x)]^{\alpha-1}g(x)
\]

is called the exponentiated pdf. If \(g(x)=\lambda e^{-\lambda x}\) (exponential pdf), then \(G(x)=1-e^{-\lambda x}\) and

\[
f(x)=\alpha\lambda\left(1-e^{-\lambda x}\right)^{\alpha-1}e^{-\lambda x}
\]

is called the exponentiated exponential, or generalized exponential, pdf with parameters \(\lambda\) and \(\alpha\). Putting \(\alpha=n-1\), we have

\[
f_{n}(t)=\lambda(n-1)\,e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{n-2},\qquad t>0,\ n>1, \tag{6.5}
\]

which is an exponentiated exponential distribution with parameters \(\lambda\) and \(n-1\).

Initial condition: \(X(0)=n_{0}\). In this case

\[
p_{j}(t)=\binom{j-1}{n_{0}-1}e^{-n_{0}\lambda t}\left(1-e^{-\lambda t}\right)^{j-n_{0}},\qquad j=n_{0},\ n_{0}+1,\ n_{0}+2,\ldots
\]

and

\[
F_{n}(t)=1-\sum_{j=n_{0}}^{n-1}p_{j}(t),
\qquad
f_{n}(t)=\frac{d}{dt}F_{n}(t)=-\sum_{j=n_{0}}^{n-1}\frac{d}{dt}p_{j}(t).
\]
Substituting for \(p_{j}(t)\),

\[
f_{n}(t)=-\frac{d}{dt}\sum_{j=n_{0}}^{n-1}\binom{j-1}{n_{0}-1}e^{-n_{0}\lambda t}\left(1-e^{-\lambda t}\right)^{j-n_{0}}.
\]

But \(j=n_{0}+k\), where \(k=0,1,2,\ldots\), so

\[
f_{n}(t)=-\frac{d}{dt}\sum_{k=0}^{n-n_{0}-1}\binom{n_{0}+k-1}{k}e^{-n_{0}\lambda t}\left(1-e^{-\lambda t}\right)^{k}. \tag{6.6}
\]

When \(n_{0}=1\),

\[
f_{n}(t)=-\frac{d}{dt}\sum_{k=0}^{n-2}e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{k}
=\frac{d}{dt}\sum_{k=n-1}^{\infty}e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{k},
\]

since \(\sum_{k=0}^{\infty}e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{k}=1\). Summing the geometric tail,

\[
\sum_{k=n-1}^{\infty}e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{k}
=e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{n-1}\frac{1}{1-\left(1-e^{-\lambda t}\right)}
=\left(1-e^{-\lambda t}\right)^{n-1},
\]

so

\[
f_{n}(t)=\frac{d}{dt}\left(1-e^{-\lambda t}\right)^{n-1}
=(n-1)\,\lambda\,e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{n-2}. \tag{6.7}
\]

6.3.2 Mean and Variance of \(f_{n}(t)\) for the Simple Birth Process

Initial condition: \(X(0)=1\).

Mean:

\[
E[T_{n}]=\int_{0}^{\infty}t\,f_{n}(t)\,dt
=\lambda(n-1)\int_{0}^{\infty}t\,e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{n-2}dt
=\lambda(n-1)\sum_{k=0}^{n-2}(-1)^{k}\binom{n-2}{k}\int_{0}^{\infty}t\,e^{-(k+1)\lambda t}\,dt,
\]

expanding \(\left(1-e^{-\lambda t}\right)^{n-2}\) by the binomial theorem. Put \(y=(k+1)\lambda t\), so that \(t=\frac{y}{(k+1)\lambda}\) and \(dt=\frac{dy}{(k+1)\lambda}\):

\[
E[T_{n}]=\lambda(n-1)\sum_{k=0}^{n-2}(-1)^{k}\binom{n-2}{k}\frac{1}{[(k+1)\lambda]^{2}}\int_{0}^{\infty}y\,e^{-y}\,dy
=\frac{n-1}{\lambda}\sum_{k=0}^{n-2}(-1)^{k}\binom{n-2}{k}\frac{1}{(k+1)^{2}}
=\frac{1}{\lambda}\sum_{k=0}^{n-2}(-1)^{k}\,\frac{(n-1)!}{(k+1)!\,(n-2-k)!}\cdot\frac{1}{k+1}.
\]
Put \(k+1=j\):

\[
E[T_{n}]=\frac{1}{\lambda}\sum_{j=1}^{n-1}(-1)^{j+1}\binom{n-1}{j}\frac{1}{j}.
\]

Next, we wish to prove that

\[
\sum_{j=1}^{n-1}(-1)^{j+1}\binom{n-1}{j}\frac{1}{j}=\sum_{j=1}^{n-1}\frac{1}{j}.
\]

Proof. From the identity

\[
1+x+x^{2}+\cdots+x^{n-1}=\sum_{k=0}^{n-1}x^{k}=\frac{1-x^{n}}{1-x},
\]

put \(x=1-t\):

\[
\sum_{k=0}^{n-1}(1-t)^{k}=\frac{1-(1-t)^{n}}{t},\qquad 0<t\le1.
\]

Integrate this identity with respect to \(t\) over \((0,1)\). For the left-hand side, put \(u=1-t\), \(du=-dt\):

\[
\text{L.H.S.}=\sum_{k=0}^{n-1}\int_{0}^{1}(1-t)^{k}\,dt=\sum_{k=0}^{n-1}\int_{0}^{1}u^{k}\,du=\sum_{k=0}^{n-1}\frac{1}{k+1}=\sum_{k=1}^{n}\frac{1}{k}.
\]

For the right-hand side, expand \((1-t)^{n}\) by the binomial theorem:

\[
\text{R.H.S.}=\int_{0}^{1}\frac{1-(1-t)^{n}}{t}\,dt
=\int_{0}^{1}\frac{-\sum_{k=1}^{n}\binom{n}{k}(-t)^{k}}{t}\,dt
=\sum_{k=1}^{n}(-1)^{k+1}\binom{n}{k}\int_{0}^{1}t^{\,k-1}\,dt
=\sum_{k=1}^{n}(-1)^{k+1}\binom{n}{k}\frac{1}{k}.
\]

Hence \(\sum_{k=1}^{n}(-1)^{k+1}\binom{n}{k}\frac{1}{k}=\sum_{k=1}^{n}\frac{1}{k}\), and applying this with \(n-1\) in place of \(n\),

\[
E[T_{n}]=\frac{1}{\lambda}\sum_{j=1}^{n-1}\frac{1}{j}. \tag{6.8}
\]

End of proof.

Variance. Next,

\[
E[T_{n}^{2}]=\int_{0}^{\infty}t^{2}\,\lambda(n-1)\,e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{n-2}dt
=\lambda(n-1)\sum_{k=0}^{n-2}(-1)^{k}\binom{n-2}{k}\int_{0}^{\infty}t^{2}\,e^{-(k+1)\lambda t}\,dt.
\]

Put \(y=(k+1)\lambda t\), so that \(t=\frac{y}{(k+1)\lambda}\) and \(dt=\frac{dy}{(k+1)\lambda}\).
Then

\[
E[T_{n}^{2}]=\lambda(n-1)\sum_{k=0}^{n-2}(-1)^{k}\binom{n-2}{k}\frac{1}{[(k+1)\lambda]^{3}}\int_{0}^{\infty}y^{2}e^{-y}\,dy
=\frac{2(n-1)}{\lambda^{2}}\sum_{k=0}^{n-2}(-1)^{k}\binom{n-2}{k}\frac{1}{(k+1)^{3}}
=\frac{2}{\lambda^{2}}\sum_{k=0}^{n-2}(-1)^{k}\binom{n-1}{k+1}\frac{1}{(k+1)^{2}},
\]

since \((n-1)\binom{n-2}{k}\frac{1}{k+1}=\binom{n-1}{k+1}\). Put \(k+1=j\):

\[
E[T_{n}^{2}]=\frac{2}{\lambda^{2}}\sum_{j=1}^{n-1}(-1)^{j+1}\binom{n-1}{j}\frac{1}{j^{2}}.
\]

6.4 Pure Birth Process with Immigration

6.4.1 Determining \(F_{n}(t)\) for the Pure Birth Process with Immigration

Initial condition: \(X(0)=n_{0}\). Now, for simple birth with immigration,

\[
p_{j}(t)=p_{n_{0},\,n_{0}+k}(t)=\binom{k+m-1}{k}e^{-m\lambda t}\left(1-e^{-\lambda t}\right)^{k},
\qquad m=n_{0}+\frac{v}{\lambda},\quad k=0,1,2,\ldots
\]

Therefore

\[
F_{n}(t)=1-\sum_{k=0}^{n-n_{0}-1}\binom{k+m-1}{k}e^{-m\lambda t}\left(1-e^{-\lambda t}\right)^{k},
\]

and

\[
f_{n}(t)=\frac{d}{dt}F_{n}(t)=-\sum_{k=0}^{n-n_{0}-1}\frac{d}{dt}\left[\binom{k+m-1}{k}e^{-m\lambda t}\left(1-e^{-\lambda t}\right)^{k}\right].
\]
n - n0 + m - 2 n - n0 + m - 1 n - n0 + m n - n0 + m + 1 fn t = = 1 n n 0 n n 0 1 n n 0 2 .... n n 0 m 2 n n0 1 n n0 m 2 n + m - n 0 - 2 ! n - n 0 - 1 ! d dt m - 1 ! n - n 0 - 1 ! n - n 0 + m - 2 ! m n - n0 d 1 e- λ t 1 - e- λ t dt m - 1 ! e- λ t 1 - e- λ t m r=0 = m n - n0 -1 d 1 e- λ t 1 - e- λ t 1 - 1 - e- λ t dt m - 1 ! = m n - n0 -1 d 1 e- λ t 1 - e- λ t e- λ t dt m - 1 ! = m n - n0 λ t d 1 e- λ t 1 - e- λ t e dt m - 1 ! 93 1 - e r=0 1 - e - λt r n - n0 - λt r fn t = n - n0 d 1 e- λ t m + λ t 1 - e- λ t dt m - 1 ! = n - n0 d 1 - λt m -1 e 1 - e- λ t dt m - 1 ! = n - n0 n - n0 - 1 1 - m - 1 λ e- λ t m - 1 1 - e- λ t + e- λ t m - 1 n - n 0 1 - e - λ t .λ e - λ t m - 1 ! = n - n0 -1 λ - λt m -1 e 1 - e- λ t - m - 1 + n - n 0 e - λ t 1 - e - λ t m - 1 ! = n - n0 - 1 λ - λt m -1 e 1 - e- λ t n - n 0 e- λ t - m - 1 1 - e- λ t m - 1 ! = n - n0 - 1 λ - λt m -1 e 1 - e- λ t n - n 0 e- λ t + m - 1 e- λ t - m + 1 m - 1 ! = n - n0 - 1 λ - m -1 λt e 1 - e- λ t n - n 0 + m - 1 e- λ t - m + 1 m 1 ! But m = n 0 + v λ - n 0 + - 1 λ t n - n 0 - 1 λ v - λt v fn t = e λ 1 - e- λ t n - n 0 +n 0 + - 1 e -n 0 - + 1 v λ λ n 0 + - 1 ! λ v v - n 0 + - 1 λ t n - n0 λ = e λ 1 - e- λ t v n 0 + - 1 ! λ = = v - λt n+ - 1 e λ n λ + v - λ - 0 * λ n - n0 λ - n λ + v - λ t e0 1 - e- λ t n λ + v - λ e- λ t - n 0 λ + v - λ n0λ + v - λ !λ λ n - n0 1 - n λ + v - λ t e0 1 - e- λ t n λ + v - λ e- λ t - n 0 λ + v - λ n λ + v λ 0 ! λ Thus, fn t = 1 n0λ + v - λ λ ! e - n0 λ + v - λ t 1 - e nλ + v - λ e - λt n - n 0 94 - λt - n 0λ + v - λ (6.9) When n 0 = 1, fn t = = n -1 1 e- v t 1 - e- λ t n λ + v - λ e- λ t - v v ! λ n -1 1 - v+λ t 1 - e- λ t n λ + v - λ e - v e- v t v ! λ (6.10) When v = 0, fn t = n - n0 1 - n -1 λt e 0 1 - e- λ t λ n - 1 e- λ t - λ n 0 - 1 n 0 - 1 ! = n - n0 λ - n -1 λt e 0 1 - e- λ t n - 1 e- λ t - n 0 - 1 n 0 - 1 ! = n - n0 λ 1 - e- λ t n 0 - 1 ! 
n - 1 e - n0λ t - n 0 - 1 e 5.4.2 Mean and Variance 95 - n 0 - 1 λ t (6.11) 6.5 Polya Process 6.5.1 Determining fn t for the Polya Process Initial Condition: X(0) = 0 Fn t = 1 - j+ 1a - 1 1a j where p t = p t p q, j j j j=0 n -1 1 j = 0, 1, 2, 3,.... . p j t can also be j j+ 1a - 1 1 a λ a t written as p j t = ; 1 a - 1 1 + λ a t 1 + λ a t 1 j+ 1a - 1 1 a λ a t Fn t = 1 - 1 j= 0 a - 1 1 + λ a t 1 + λ a t n -1 j = 0, 1, 2, 3,.... j 1 j+ 1a - 1 j -j+ a ; t > 0 and n = 1, 2, 3,.. = 1 - λ a t 1 + λ a t 1 1 j= 0 a n -1 Differentiating both sides with respect to t, we have fn t = d Fn t = dt j+ 1a - 1 1 j= 0 a - 1 n -1 =- j+ 1a - 1 d j - j + 1a λ a t 1 + λ a t 1 j= 0 a - 1 dt n -1 - j + 1 a j -1 jaλ λ a t 1 + λ a t - j + 1 n - 1 j+ a - 1 fn t = λ j + 1 j = 0 a - 1 But j+ 1a - 1 j + 1 a - 1 1 a 1 a a λ a t 1 + λ a t j - j+1+ 1 a 1 a a λ λ a t 1 + λ a t j - j+1+ 1 a j+ 1a - 1! a j j+ 1a - 1 = and a j = 1 1a - 1! j! a - 1 a j+ 1a - 1! = j+ 1a - 1! = j+ 1a - 1 1 1 1 1 a a a - 1 ! j - 1 ! a ! j - 1! n -1 j j -1 1 n -1 λ a t j+ 1a - 1 λ a t j+ a f n t =λ 1 1 - 1 1 j+1+ j+ a a j =1 a j = 0 a 1 + λ a t 1 + λ a t If we take u = j - 1, the above equation becomes n -1 j 1 λ a t j+ a f n t =λ 1 1 j+1+ a j = 0 a 1 + λ a t u u- 1 λ a t 1 u +1+ 1 a u = 0 a 1 + λ a t n-2 96 - j + 1 a j+ 1a - 1 j -1 - ja λ a t 1 + λ a t 1 j =1 a - 1 n -1 j+ 1a - 1! j+ 1a - 1! j + 1a j + 1a ! j + 1a 1 = 1 = 1 a = 1 j + a a = 1 !j ! a - 1! j! 1 a a 1 ! j! a n - 1 + 1a fn t = λ 1 a λ a t n -1 1 + λ a t n + 1 a λ n - 1 + 1a ! λ a t = 1 n + 1 ! n - 1! 1 + λ a t a a n -1 = λ n+ 1a 1 a + 1 × n = 1 a 1 a n+ fn t = 1 a 1 a n -1 tn -1 1 + λ a t n λ a × λ a n × n -1 1 + λ a t λ a × n n+ 1a = 1 a n + λ a t × n 1 a n -1 1 + λ a t n+ 1a λ =a × × 1 a a λ n+ λ a t n + 1 a 1 1 a λa n n + 1 a , t> 0, n > 0 , t> 0, n > 0 1 a tn -1 t + n + 1 a 1 λa tn -1 t + n + n + 1 a , t > 0, n > 0, 1 λa 1 1 > 0, >0 a λa Definition of generalized Pareto distribution Let X\θ ~ Gamma α, θ and θ ~ G λ, β . 
Thus a gamma mixture of a gamma distribution is given by

\[
f(x)=\int_{0}^{\infty}\frac{\theta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\theta x}\,g(\theta)\,d\theta
=\int_{0}^{\infty}\frac{\theta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\theta x}\,\frac{\beta^{\lambda}}{\Gamma(\lambda)}\,\theta^{\lambda-1}e^{-\beta\theta}\,d\theta
=\frac{x^{\alpha-1}\beta^{\lambda}}{\Gamma(\alpha)\,\Gamma(\lambda)}\int_{0}^{\infty}e^{-(x+\beta)\theta}\,\theta^{\alpha+\lambda-1}\,d\theta.
\]

Put \(y=(x+\beta)\theta\), so that \(\theta=\frac{y}{x+\beta}\) and \(d\theta=\frac{dy}{x+\beta}\). Therefore

\[
f(x)=\frac{x^{\alpha-1}\beta^{\lambda}}{\Gamma(\alpha)\,\Gamma(\lambda)}\cdot\frac{1}{(x+\beta)^{\alpha+\lambda}}\int_{0}^{\infty}e^{-y}\,y^{\alpha+\lambda-1}\,dy
=\frac{\Gamma(\alpha+\lambda)}{\Gamma(\alpha)\,\Gamma(\lambda)}\,\beta^{\lambda}\,\frac{x^{\alpha-1}}{(x+\beta)^{\alpha+\lambda}},\qquad x>0.
\]

Going back to equation (6.12), replace \(x\) by \(t\), \(\alpha\) by \(n\), \(\lambda\) by \(\frac1a\) and \(\beta\) by \(\frac{1}{\lambda a}\):

\[
f_{n}(t)=\frac{\Gamma\!\left(n+\frac1a\right)}{\Gamma(n)\,\Gamma\!\left(\frac1a\right)}\left(\frac{1}{\lambda a}\right)^{\frac1a}\frac{t^{\,n-1}}{\left(t+\frac{1}{\lambda a}\right)^{n+\frac1a}},
\qquad t>0,\ n>0,\ \frac1a>0,\ \frac{1}{\lambda a}>0.
\]

This is the generalized Pareto (a gamma mixture of gamma) distribution. Note that, being of this form, \(\int_{0}^{\infty}f_{n}(t)\,dt=1\). \hfill(6.13)

6.5.2 Mean and Variance of \(f_{n}(t)\) for the Polya Process

Initial condition: \(X(0)=0\).

Mean:

\[
E[T_{n}]=\int_{0}^{\infty}t\,f_{n}(t)\,dt
=\frac{\Gamma\!\left(n+\frac1a\right)}{\Gamma(n)\,\Gamma\!\left(\frac1a\right)}\left(\frac{1}{\lambda a}\right)^{\frac1a}\int_{0}^{\infty}\frac{t^{\,n}}{\left(t+\frac{1}{\lambda a}\right)^{n+\frac1a}}\,dt.
\]

The integral is the normalizing integral of a generalized Pareto density with \(\alpha=n+1\) and \(\lambda\) replaced by \(\frac1a-1\), giving

\[
E[T_{n}]=\frac{n}{\lambda a\left(\frac1a-1\right)}=\frac{n}{\lambda(1-a)},\qquad 0<a<1. \tag{6.14}
\]

Variance:

\[
E[T_{n}^{2}]=\frac{\Gamma\!\left(n+\frac1a\right)}{\Gamma(n)\,\Gamma\!\left(\frac1a\right)}\left(\frac{1}{\lambda a}\right)^{\frac1a}\int_{0}^{\infty}\frac{t^{\,n+1}}{\left(t+\frac{1}{\lambda a}\right)^{n+\frac1a}}\,dt
=\frac{n(n+1)}{(\lambda a)^{2}\left(\frac1a-1\right)\left(\frac1a-2\right)}
=\frac{n(n+1)}{\lambda^{2}(1-a)(1-2a)},
\]

which requires \(0<a<1\) and \(0<2a<1\), therefore \(0<a<\frac12\).
Then

\[
\operatorname{Var}(T_{n})=E[T_{n}^{2}]-\bigl(E[T_{n}]\bigr)^{2}
=\frac{n(n+1)}{\lambda^{2}(1-a)(1-2a)}-\frac{n^{2}}{\lambda^{2}(1-a)^{2}}
=\frac{n(n+1)(1-a)-n^{2}(1-2a)}{\lambda^{2}(1-a)^{2}(1-2a)}.
\]

Simplifying the numerator, \(n(n+1)(1-a)-n^{2}(1-2a)=n\bigl[1+a(n-1)\bigr]=a\,n\left(n+\frac1a-1\right)\), so

\[
\operatorname{Var}(T_{n})=\frac{a\,n\left(n+\frac1a-1\right)}{\lambda^{2}(1-a)^{2}(1-2a)}
=\frac{n\,(an+1-a)}{\lambda^{2}(1-a)^{2}(1-2a)},\qquad n>0,\ 0<a<\tfrac12. \tag{6.15}
\]

CHAPTER SEVEN

SUMMARY AND CONCLUSION

7.1 Summary

In this thesis, we derived the Kolmogorov forward and backward differential equations for the continuous-time non-homogeneous pure birth process and then solved the forward Kolmogorov differential equation using three alternative approaches. We were able to generate the distributions of the increments of pure birth processes by applying the three alternative approaches.

7.1.1 General Kolmogorov Forward and Backward Equations

The general forward Kolmogorov differential equation is

\[
\frac{\partial}{\partial t}p_{k,k+n}(s,t)+\lambda_{k+n}(t)\,p_{k,k+n}(s,t)=\lambda_{k+n-1}(t)\,p_{k,k+n-1}(s,t), \tag{2.11}
\]

with initial conditions \(p_{kk}(s,s)=1\) and \(p_{k,k+n}(s,s)=0\) for \(n>0\). The general backward Kolmogorov differential equation is

\[
\frac{\partial}{\partial s}p_{ij}(s,t)=-\lambda_{i}(s)\,p_{ij}(s,t)+\lambda_{i}(s)\,p_{i+1,j}(s,t). \tag{2.14}
\]

7.1.2 Poisson Process

From the Poisson process, we found the distribution of the increments to be a Poisson distribution with parameter \(\lambda(t-s)\):

\[
p_{k,k+n}(s,t)=e^{-\lambda(t-s)}\,\frac{[\lambda(t-s)]^{n}}{n!},\qquad n=0,1,2,\ldots \tag{3.6, 4.5}
\]

This is independent of the initial state and depends only on the length of the time interval; thus for a Poisson process the increments are independent and stationary. The first passage distribution is

\[
f_{n}(t)=\frac{e^{-\lambda t}(\lambda t)^{n-1}}{(n-1)!}\,\lambda=\frac{\lambda^{n}}{(n-1)!}\,e^{-\lambda t}\,t^{\,n-1},\qquad t>0,\ n=1,2,3,\ldots \tag{6.2}
\]

which is gamma\((n,\lambda)\).
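As a quick numerical sanity check on the gamma\((n,\lambda)\) first-passage law, the sketch below integrates the density on a grid and recovers the total mass, the mean \(n/\lambda\) and the variance \(n/\lambda^{2}\). The values \(n=5\), \(\lambda=2\) are illustrative assumptions chosen for the check, not taken from the thesis.

```python
import math

def poisson_first_passage_pdf(t, n, lam):
    """Gamma(n, lam) density of T_n for the Poisson process, eq. (6.2)."""
    return lam**n * t**(n - 1) * math.exp(-lam * t) / math.factorial(n - 1)

n, lam = 5, 2.0
dt = 1e-3
total = mean = second = 0.0
for k in range(40000):                      # midpoint rule on (0, 40)
    t = (k + 0.5) * dt
    f = poisson_first_passage_pdf(t, n, lam) * dt
    total += f
    mean += t * f
    second += t * t * f
var = second - mean * mean
assert abs(total - 1.0) < 1e-4
assert abs(mean - n / lam) < 1e-4           # E[T_n] = n/lambda, eq. (6.3)
assert abs(var - n / lam**2) < 1e-4         # Var[T_n] = n/lambda^2, eq. (6.4)
```

The truncation point t = 40 is far into the exponential tail for these parameters, so the discretized moments match the closed forms to within the stated tolerances.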
The mean of the first passage distribution was found to be

\[
E[T_{n}]=\frac{n}{\lambda}, \tag{6.3}
\]

and the variance of the first passage distribution was found to be

\[
\operatorname{Var}(T_{n})=\frac{n}{\lambda^{2}}. \tag{6.4}
\]

7.1.3 Simple Birth Process

From the simple birth process, we found the distribution of the increments to be negative binomial with \(p=e^{-\lambda(t-s)}\) and \(q=1-e^{-\lambda(t-s)}\):

\[
p_{k,k+n}(s,t)=\binom{k+n-1}{n}e^{-k\lambda(t-s)}\left(1-e^{-\lambda(t-s)}\right)^{n}. \tag{3.15, 4.9, 5.8}
\]

It depends on the length of the time interval \(t-s\) and on \(k\), and is thus stationary but not independent. The first passage distribution when \(n_{0}=1\) is

\[
f_{n}(t)=\lambda(n-1)\,e^{-\lambda t}\left(1-e^{-\lambda t}\right)^{n-2},\qquad t>0,\ n=2,3,\ldots
\]

which is an exponentiated exponential distribution with parameters \(\lambda\) and \(n-1\). The mean of the first passage distribution was found to be

\[
E[T_{n}]=\frac{1}{\lambda}\sum_{j=1}^{n-1}\frac{1}{j}. \tag{6.8}
\]

7.1.4 Simple Birth with Immigration

From the simple birth process with immigration, we found the distribution of the increments to be negative binomial with \(p=e^{-\lambda(t-s)}\) and \(q=1-e^{-\lambda(t-s)}\):

\[
p_{k,k+n}(s,t)=\binom{k+\frac{v}{\lambda}+n-1}{n}\left(e^{-\lambda(t-s)}\right)^{k+\frac{v}{\lambda}}\left(1-e^{-\lambda(t-s)}\right)^{n}. \tag{3.22, 4.13, 5.9}
\]

It depends on the length of the time interval \(t-s\) and on \(k\), and is thus stationary but not independent. The first passage distribution is

\[
f_{n}(t)=\lambda\,\frac{\left(n+\frac{v}{\lambda}-1\right)!}{\left(n_{0}+\frac{v}{\lambda}-1\right)!\,(n-n_{0}-1)!}\;e^{-(n_{0}\lambda+v)t}\left(1-e^{-\lambda t}\right)^{n-n_{0}-1}. \tag{6.9}
\]

7.1.5 Polya Process

From the Polya process, we found the distribution of the increments to be negative binomial with

\[
p=\frac{1+\lambda as}{1+\lambda at},\qquad q=1-p=\frac{\lambda a(t-s)}{1+\lambda at}:
\]

\[
p_{k,k+j}(s,t)=\binom{k+\frac1a+j-1}{j}\left(\frac{1+\lambda as}{1+\lambda at}\right)^{k+\frac1a}\left(\frac{\lambda a(t-s)}{1+\lambda at}\right)^{j}. \tag{3.28, 4.17, 5.10}
\]

It depends on the length of the time interval \(t-s\) and on \(k\), and is thus stationary but not independent. The first passage distribution is

\[
f_{n}(t)=\frac{\Gamma\!\left(n+\frac1a\right)}{\Gamma(n)\,\Gamma\!\left(\frac1a\right)}\left(\frac{1}{\lambda a}\right)^{\frac1a}\frac{t^{\,n-1}}{\left(t+\frac{1}{\lambda a}\right)^{n+\frac1a}},\qquad t>0,\ n>0,\ \frac1a>0,\ \frac{1}{\lambda a}>0,
\]

which is a generalized Pareto distribution.
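The generalized Pareto form of the Polya first-passage density can likewise be checked numerically. The sketch below uses assumed illustrative values (\(n=3\), \(\lambda=1\), \(a=0.25\), not taken from the thesis) and confirms that the density integrates to one and that its mean matches \(n/(\lambda(1-a))\) from (6.14).

```python
import math

def polya_first_passage_pdf(t, n, lam, a):
    """Generalized Pareto density of T_n for the Polya process, eq. (6.12)."""
    c = math.gamma(n + 1/a) / (math.gamma(n) * math.gamma(1/a))
    beta = 1.0 / (lam * a)
    return c * beta**(1/a) * t**(n - 1) / (t + beta)**(n + 1/a)

n, lam, a = 3, 1.0, 0.25
dt = 0.01
total = mean = 0.0
for k in range(200000):                     # midpoint rule on (0, 2000)
    t = (k + 0.5) * dt
    f = polya_first_passage_pdf(t, n, lam, a) * dt
    total += f
    mean += t * f
assert abs(total - 1.0) < 1e-3
assert abs(mean - n / (lam * (1 - a))) < 1e-2   # E[T_n] = n/(lambda*(1-a))
```

Because the density has a polynomial (Pareto-type) tail rather than an exponential one, the integration range must be taken much wider than in the Poisson case before the truncated moments settle near their closed forms.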
Its mean and variance are

\[
E[T_{n}]=\frac{n}{\lambda(1-a)},\qquad 0<a<1, \tag{6.14}
\]

\[
\operatorname{Var}(T_{n})=\frac{a\,n\left(n+\frac1a-1\right)}{\lambda^{2}(1-a)^{2}(1-2a)},\qquad n>0,\ 0<a<\tfrac12. \tag{6.15}
\]

7.2 Conclusion

In each case of the birth processes, the various approaches lead to the same distribution. The distributions that emerged from increments of pure birth processes were power series distributions and were all discrete. The first passage distributions that emerged from pure birth processes were continuous distributions.

7.3 Recommendation for Further Research

(1) Determine the distributions emerging from pure birth processes when the birth rate is a distribution function.

(2) Determine the distributions emerging from pure birth processes when the birth rate changes over certain time intervals.

(3) Determine the distributions emerging from pure birth processes when the birth rate is a survival function.

(4) Determine the distributions emerging from pure birth processes when more than one birth is allowed between time t and t + ∆t.

7.4 References

Bhattacharya, R.N. and Waymire, E. (1990). Stochastic Processes with Applications. New York: John Wiley and Sons, pp. 261-262.

Janardan, K.G. (2005). "Integral Representation of a Distribution Associated with a Pure Birth Process". Communications in Statistics - Theory and Methods, 34:11, 2097-2103.

Morgan, B.J.T. (2006). "Four Approaches to Solving Linear Birth-and-Death (and Similar) Processes". International Journal of Mathematical Education in Science and Technology, vol. 10.1, pp. 51-64.

Van Moorsel, A. and Wolter, K. (1998). "Numerical Solution of Non-Homogeneous Markov Processes through Uniformization". In Proc. of the European Simulation Multiconference, pp. 710-717.