International Journal of Mathematical Analysis
Vol. 10, 2016, no. 13, 639 - 649
HIKARI Ltd, www.m-hikari.com
http://dx.doi.org/10.12988/ijma.2016.6340
Product and Ratio of Macdonald Random Variables
Daya K. Nagar, Edwin Zarrazola and Luz Estela Sánchez
Instituto de Matemáticas
Universidad de Antioquia
Calle 67, No. 53108, Medellín, Colombia
Copyright © 2016 Daya K. Nagar, Edwin Zarrazola and Luz Estela Sánchez. This article is distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
In this article, distributions of the product Y1 Y2 and the ratio Y1 /Y2 are derived when Y1 and Y2 are independent or correlated Macdonald random variables.
Mathematics Subject Classification: 33E99, 60E05
Keywords: Beta distribution; extended gamma function; gamma distribution; Gauss hypergeometric function; Macdonald distribution
1 Introduction
In 1994, Chaudhry and Zubair [1] defined the extended gamma function, Γ(ν; σ), as

$$\Gamma(\nu;\sigma)=\int_0^\infty t^{\nu-1}\exp\left(-t-\frac{\sigma}{t}\right)dt, \qquad (1)$$

where Re(σ) > 0 and ν is any complex number. For Re(ν) > 0, setting σ = 0 reduces this extension to the classical gamma function, Γ(ν; 0) = Γ(ν). The extended gamma function has proved very useful in various problems in engineering and physics; see, for example, Chaudhry and Zubair [1–4].
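Definition (1) is also easy to evaluate numerically, which is convenient for spot-checking the identities that follow. The sketch below is our own illustration (not part of the paper; the function name is ours): after the substitution t = e^x the integrand decays double-exponentially, so a plain trapezoidal rule is very accurate. For Re(ν) > 0 and σ = 0 it should recover the classical gamma function.

```python
import math

def ext_gamma(nu, sigma, n=3000, lo=-30.0, hi=30.0):
    """Extended gamma function (1) by trapezoidal quadrature.

    Substituting t = exp(x) turns the integrand into
    exp(nu*x - t - sigma/t), which decays double-exponentially
    in both directions, so the trapezoidal rule converges fast.
    """
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        t = math.exp(x)
        f = math.exp(nu * x - t - sigma / t)
        total += f if 0 < i < n else 0.5 * f
    return h * total

# sigma = 0 with Re(nu) > 0 recovers the classical gamma function
print(ext_gamma(3.0, 0.0))   # close to Gamma(3) = 2
print(ext_gamma(0.5, 0.0))   # close to Gamma(1/2) = sqrt(pi)
```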
By using the definition of the extended gamma function, Chaudhry and Zubair [2] introduced a one-parameter Macdonald distribution. By making a slight change in the p.d.f. (probability density function) proposed by them, we define a three-parameter Macdonald distribution with the p.d.f. (Nagar, Roldán-Correa and Gupta [10, 11], Nagar, Zarrazola and Sánchez [12])

$$\frac{\sigma^{-\beta}\,y^{\beta-1}\,\Gamma(\alpha;\,y/\sigma)}{\Gamma(\beta)\Gamma(\alpha+\beta)}, \qquad y>0,\quad \sigma>0,\quad \beta>0,\quad \alpha+\beta>0. \qquad (2)$$
We will denote it by Y ∼ M(α, β, σ). If σ = 1 in the density above, then we
will simply write Y ∼ M(α, β).
We propose the bivariate generalization of the Macdonald distribution as
follows.
Definition 1.1. The random variables Y1 and Y2 are said to have a bivariate Macdonald distribution with parameters α, β1, β2 and σ, denoted as (Y1, Y2) ∼ BM(α, β1, β2, σ), if their joint p.d.f. is given by

$$\frac{y_1^{\beta_1-1}\,y_2^{\beta_2-1}\,\Gamma\!\left(\alpha;\,(y_1+y_2)/\sigma\right)}{\sigma^{\beta_1+\beta_2}\,\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha+\beta_1+\beta_2)}, \qquad y_1>0,\quad y_2>0, \qquad (3)$$

where β1 > 0, β2 > 0, α + β1 + β2 > 0 and σ > 0.
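The density (3) admits a convenient mixture reading (an observation of ours, not stated in the paper): writing Γ(α; ·) in (3) as an integral shows that if U ∼ Ga(α + β1 + β2) and, given U = u, Y1 and Y2 are independent with Yi ∼ Ga(βi, σu), then (Y1, Y2) ∼ BM(α, β1, β2, σ). This gives an easy sampler, checked here against the marginal mean E(Y1) = σβ1(α + β1 + β2) implied by the product-moment formula (22) below; the function name and parameter values are ours.

```python
import random

def sample_bm(alpha, b1, b2, sigma, rng):
    # Gamma-mixture representation of (3): U ~ Ga(alpha + b1 + b2, 1),
    # then Y1, Y2 | U = u independent with Yi ~ Ga(bi, sigma * u).
    u = rng.gammavariate(alpha + b1 + b2, 1.0)
    y1 = rng.gammavariate(b1, sigma * u)
    y2 = rng.gammavariate(b2, sigma * u)
    return y1, y2

rng = random.Random(12345)
alpha, b1, b2, sigma = 1.5, 2.0, 3.0, 0.5
n = 200_000
mean_y1 = sum(sample_bm(alpha, b1, b2, sigma, rng)[0] for _ in range(n)) / n
exact = sigma * b1 * (alpha + b1 + b2)   # = 6.5 for these parameters
print(mean_y1, exact)
```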
In this article, distributions of the product Y1 Y2 and the ratio Y1 /Y2 are derived when (i) the random variables Y1 and Y2 are independent, each having a univariate Macdonald distribution, and (ii) the random variables Y1 and Y2 are correlated, having a bivariate Macdonald distribution. We also derive several other properties such as marginal and conditional distributions, moments, the distribution of the sum, etc. for the bivariate Macdonald distribution defined by the density (3).
2 Some Definitions and Preliminary Results
In this section, we give some definitions and preliminary results which are used
in subsequent sections.
The gamma function was first introduced by Leonhard Euler in 1729 as the limit of a discrete expression and later as an absolutely convergent improper integral,

$$\Gamma(\nu)=\int_0^\infty t^{\nu-1}\exp(-t)\,dt, \qquad \mathrm{Re}(\nu)>0. \qquad (4)$$
The gamma function has many beautiful properties and has been used in almost all the branches of science and engineering. Replacing t by z/σ, σ > 0, in (4), a more general form of the gamma function can be given as

$$\Gamma(\nu)=\sigma^{-\nu}\int_0^\infty z^{\nu-1}\exp\left(-\frac{z}{\sigma}\right)dz, \qquad \mathrm{Re}(\nu)>0. \qquad (5)$$
The extended gamma function is very similar to the modified Bessel function of type 2. An integral representation of the modified Bessel function of type 2 (Gradshteyn and Ryzhik [6, Eq. 3.471.9]) is given by

$$K_\nu(2\sqrt{ab})=\frac{1}{2}\left(\frac{a}{b}\right)^{\nu/2}\int_0^\infty t^{\nu-1}\exp\left(-at-\frac{b}{t}\right)dt, \qquad (6)$$

where Re(a) > 0 and Re(b) > 0. Comparing (1) and (6), it can easily be seen that

$$\Gamma(\nu;\sigma)=2\sigma^{\nu/2}K_\nu(2\sqrt{\sigma}).$$
Further, using the relationship between the extended gamma function and the modified Bessel function of type 2, we can re-write a result given in Erdélyi, Magnus, Oberhettinger and Tricomi [5, Eq. 7.2.6.40] as

$$\Gamma\!\left(n+\frac{1}{2};\frac{z^2}{4}\right)=\left(\frac{z}{2}\right)^{n+1/2}\sqrt{\frac{2\pi}{z}}\,\exp(-z)\sum_{m=0}^{n}(2z)^{-m}\,\frac{\Gamma(n+m+1)}{\Gamma(n-m+1)\,m!}, \qquad (7)$$

where n is a non-negative integer.
Also, substituting x = σ/t in (1), it can be checked that

$$\Gamma(\nu;\sigma)=\sigma^{\nu}\,\Gamma(-\nu;\sigma). \qquad (8)$$
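Both (7) and (8) are easy to sanity-check numerically. For n = 0, (7) collapses to Γ(1/2; z²/4) = √π e^{−z}. The sketch below is our own illustration; `ext_gamma` is a throwaway quadrature for (1), not code from the paper.

```python
import math

def ext_gamma(nu, sigma, n=3000, lo=-30.0, hi=30.0):
    # Trapezoidal quadrature for (1) after substituting t = exp(x).
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        t = math.exp(x)
        f = math.exp(nu * x - t - sigma / t)
        s += f if 0 < i < n else 0.5 * f
    return h * s

z = 1.7
lhs7 = ext_gamma(0.5, z * z / 4.0)            # Gamma(1/2; z^2/4)
rhs7 = math.sqrt(math.pi) * math.exp(-z)      # n = 0 case of (7)

nu, sig = 0.7, 2.0
lhs8 = ext_gamma(nu, sig)                     # Gamma(nu; sigma)
rhs8 = sig ** nu * ext_gamma(-nu, sig)        # sigma^nu * Gamma(-nu; sigma), per (8)

print(lhs7, rhs7)
print(lhs8, rhs8)
```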
The integral representation of the Gauss hypergeometric function is given by (Luke [9])

$$\,_2F_1(a,b;c;z)=\frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)}\int_0^1 t^{a-1}(1-t)^{c-a-1}(1-zt)^{-b}\,dt, \qquad (9)$$

where Re(c) > Re(a) > 0 and |arg(1 − z)| < π.
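Representation (9) can be checked against the defining Gauss series Σ (a)_n (b)_n z^n / ((c)_n n!). The comparison below is our own sketch; both routines are naive implementations that are adequate for well-behaved parameters (|z| < 1 and Re(c) > Re(a) > 1 so the integrand vanishes at both endpoints).

```python
import math

def hyp2f1_series(a, b, c, z, terms=500):
    # Gauss series for 2F1; converges for |z| < 1.
    term, total = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

def hyp2f1_euler(a, b, c, z, n=20000):
    # Right-hand side of (9) by the trapezoidal rule;
    # valid for Re(c) > Re(a) > 0.
    h = 1.0 / n
    s = 0.0
    for i in range(1, n):   # integrand vanishes at t = 0 and t = 1 here
        t = i * h
        s += t ** (a - 1) * (1 - t) ** (c - a - 1) * (1 - z * t) ** (-b)
    coef = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))
    return coef * s * h

a, b, c, z = 2.0, 1.5, 4.5, 0.4
s_val = hyp2f1_series(a, b, c, z)
e_val = hyp2f1_euler(a, b, c, z)
print(s_val, e_val)
```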
Finally, we define the gamma, beta type 1 and beta type 2 distributions.
These definitions can be found in Johnson, Kotz and Balakrishnan [8].
Definition 2.1. The random variable X is said to have a gamma distribution with parameters θ (> 0) and κ (> 0), denoted by X ∼ Ga(κ, θ), if its p.d.f. is given by

$$\frac{x^{\kappa-1}\exp(-x/\theta)}{\theta^{\kappa}\Gamma(\kappa)}, \qquad x>0.$$
Note that for θ = 1, the above distribution reduces to a standard gamma
distribution and in this case we write X ∼ Ga(κ).
Definition 2.2. The random variable X is said to have a beta type 1 distribution with parameters (a, b), a > 0, b > 0, denoted as X ∼ B1(a, b), if its p.d.f. is given by

$$\frac{x^{a-1}(1-x)^{b-1}}{B(a,b)}, \qquad 0<x<1,$$

where B(a, b) is the beta function given by

$$B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}, \qquad \mathrm{Re}(a)>0,\quad \mathrm{Re}(b)>0.$$
Definition 2.3. The random variable X is said to have a beta type 2 distribution with parameters (a, b), denoted as X ∼ B2(a, b), a > 0, b > 0, if its p.d.f. is given by

$$\frac{x^{a-1}(1+x)^{-(a+b)}}{B(a,b)}, \qquad x>0.$$
The matrix variate generalizations of the gamma, beta type 1 and beta
type 2 distributions have been defined and studied extensively. For example,
see Gupta and Nagar [7].
Theorem 2.1. Let W1 and W2 be independent, W1 ∼ Ga(κ1, θ1) and W2 ∼ Ga(κ2, θ2). Then, W1 W2 ∼ M(κ2 − κ1, κ1, θ1 θ2).

Proof. The joint p.d.f. of W1 and W2 is given by

$$\frac{w_1^{\kappa_1-1}w_2^{\kappa_2-1}\exp(-w_1/\theta_1-w_2/\theta_2)}{\theta_1^{\kappa_1}\theta_2^{\kappa_2}\Gamma(\kappa_1)\Gamma(\kappa_2)}, \qquad w_1>0,\quad w_2>0. \qquad (10)$$

Now, transforming Z = W1 W2 and W2 = W2 with the Jacobian J(w1, w2 → z, w2) = 1/w2 in (10), the joint p.d.f. of Z and W2 is derived as

$$\frac{z^{\kappa_1-1}w_2^{\kappa_2-\kappa_1-1}\exp\left(-z/(\theta_1 w_2)-w_2/\theta_2\right)}{\theta_1^{\kappa_1}\theta_2^{\kappa_2}\Gamma(\kappa_1)\Gamma(\kappa_2)}, \qquad z>0,\quad w_2>0. \qquad (11)$$

Finally, integrating out w2 in (11) by using (1), we get the desired result.
It is straightforward to show that if W1 and W2 are independent, W1 ∼ Ga(κ1, θ1) and W2 ∼ Ga(κ2, θ2), then

$$E(W_1^r W_2^r)=\frac{(\theta_1\theta_2)^r\,\Gamma(\kappa_1+r)\Gamma(\kappa_2+r)}{\Gamma(\kappa_1)\Gamma(\kappa_2)}. \qquad (12)$$
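Theorem 2.1 and (12) lend themselves to a quick simulation check with the standard library's gamma sampler (an illustrative sketch of ours; the parameter values are arbitrary):

```python
import math
import random

k1, t1, k2, t2 = 2.0, 1.0, 3.0, 2.0   # shapes and scales of W1, W2
rng = random.Random(7)

n = 200_000
mc = sum(rng.gammavariate(k1, t1) * rng.gammavariate(k2, t2) for _ in range(n)) / n

# r = 1 in (12): E(W1 W2) = theta1*theta2*Gamma(k1+1)*Gamma(k2+1)/(Gamma(k1)*Gamma(k2))
exact = (t1 * t2) * math.gamma(k1 + 1) * math.gamma(k2 + 1) \
        / (math.gamma(k1) * math.gamma(k2))
print(mc, exact)   # exact = 12.0 here
```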
3 The Univariate Macdonald Distribution
By using (8) in (2), it is clear that the two statements Y ∼ M(α, β, σ) and Y ∼ M(−α, β + α, σ) are equivalent.
From (2), the r-th moment of Y is derived as

$$E(Y^r)=\frac{\sigma^r\,\Gamma(\beta+r)\Gamma(\alpha+\beta+r)}{\Gamma(\beta)\Gamma(\alpha+\beta)}. \qquad (13)$$
Substituting α = 1/2 in (13) and applying the duplication formula for the gamma function, namely,

$$\Gamma(2z)=\frac{2^{2z-1}\Gamma(z)\Gamma(z+1/2)}{\sqrt{\pi}},$$

the r-th moment of 2√Y is derived as

$$E\left[(2\sqrt{Y})^r\right]=\frac{\sigma^{r/2}\,\Gamma(2\beta+r)}{\Gamma(2\beta)}. \qquad (14)$$

From the above moment expression it is clear that 2√(Y/σ) has a standard gamma distribution with shape parameter 2β. Similarly, it can also be shown that if Y ∼ M(−1/2, β, σ), then 2√(Y/σ) has a standard gamma distribution with shape parameter 2β − 1.
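By Theorem 2.1, a draw from M(1/2, β, σ) with σ = θ1θ2 is the product of independent Ga(β, θ1) and Ga(β + 1/2, θ2) draws, so the claim that 2√(Y/σ) ∼ Ga(2β) can be tested by simulation. The sketch below is our own illustration; a standard Ga(2β) variable has mean and variance both equal to 2β.

```python
import random

beta, th1, th2 = 2.0, 1.0, 3.0
rng = random.Random(42)

n = 200_000
vals = []
for _ in range(n):
    # Y = W1 * W2 ~ M(1/2, beta, th1*th2) by Theorem 2.1
    y = rng.gammavariate(beta, th1) * rng.gammavariate(beta + 0.5, th2)
    vals.append(2.0 * (y / (th1 * th2)) ** 0.5)

mean = sum(vals) / n
var = sum((v - mean) ** 2 for v in vals) / n
print(mean, var)   # both should be near 2*beta = 4
```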
If Y1 and Y2 are independent, Y1 ∼ M(α1, β1, σ1) and Y2 ∼ M(α2, β2, σ2), then from the above expression

$$E(Y_1^r Y_2^r)=\frac{(\sigma_1\sigma_2)^r\,\Gamma(\beta_1+r)\Gamma(\beta_2+r)\Gamma(\alpha_1+\beta_1+r)\Gamma(\alpha_2+\beta_2+r)}{\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha_1+\beta_1)\Gamma(\alpha_2+\beta_2)}.$$

Specializing α1, α2, β1 and β2, re-writing the gamma functions by applying the duplication formula for the gamma function, comparing the resulting expression with (13), and using Theorem 2.1, we get the following results:

• If α1 = α2 = 1/2, then 4√(Y1 Y2) ∼ M(2β2 − 2β1, 2β1, √(σ1 σ2)).

• If α1 = α2 = α, β1 = β, β2 = β + 1/2, then 4√(Y1 Y2) ∼ M(2α, 2β, √(σ1 σ2)).

• If α1 = 1/2 and α2 = −1/2, then 4√(Y1 Y2) ∼ M(2β2 − 2β1 − 1, 2β1, √(σ1 σ2)).
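The first of these results can be verified deterministically on moments: the r-th moment of 4√(Y1 Y2), computed from (13) for the two independent Macdonald factors, must coincide with the r-th moment of M(2β2 − 2β1, 2β1, √(σ1 σ2)). The check below is our own sketch (function name and parameter values are ours), using only `math.gamma`:

```python
import math

def macdonald_moment(r, alpha, beta, sigma):
    # r-th moment (13) of M(alpha, beta, sigma)
    return (sigma ** r * math.gamma(beta + r) * math.gamma(alpha + beta + r)
            / (math.gamma(beta) * math.gamma(alpha + beta)))

b1, b2, s1, s2 = 1.3, 2.1, 0.7, 1.9
rel_errs = []
for r in (1, 2, 3):
    # E[(4*sqrt(Y1 Y2))^r] with alpha1 = alpha2 = 1/2, via independence and (13)
    lhs = (4.0 ** r * macdonald_moment(r / 2.0, 0.5, b1, s1)
           * macdonald_moment(r / 2.0, 0.5, b2, s2))
    rhs = macdonald_moment(r, 2 * b2 - 2 * b1, 2 * b1, math.sqrt(s1 * s2))
    rel_errs.append(abs(lhs - rhs) / rhs)
print(rel_errs)
```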
Theorem 3.1. Let Y1 and Y2 be independent, Y1 ∼ M(α1, β1, σ1) and Y2 ∼ M(α2, β2, σ2). Then, the p.d.f. of Z = Y1 /Y2 is given by

$$\frac{(\sigma_2/\sigma_1)^{\beta_1}\,\Gamma(\beta)\Gamma(\alpha_1+\alpha_2+\beta)\Gamma(\alpha_1+\beta)\Gamma(\alpha_2+\beta)}{\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha_1+\beta_1)\Gamma(\alpha_2+\beta_2)\Gamma(\alpha_1+\alpha_2+2\beta)}\,z^{\beta_1-1}\,{}_2F_1\!\left(\alpha_2+\beta,\beta;\alpha_1+\alpha_2+2\beta;1-\frac{\sigma_2 z}{\sigma_1}\right), \qquad 0<z<\frac{\sigma_1}{\sigma_2}, \qquad (15)$$

and

$$\frac{(\sigma_1/\sigma_2)^{\beta_2}\,\Gamma(\beta)\Gamma(\alpha_1+\alpha_2+\beta)\Gamma(\alpha_1+\beta)\Gamma(\alpha_2+\beta)}{\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha_1+\beta_1)\Gamma(\alpha_2+\beta_2)\Gamma(\alpha_1+\alpha_2+2\beta)}\,z^{-\beta_2-1}\,{}_2F_1\!\left(\alpha_1+\beta,\beta;\alpha_1+\alpha_2+2\beta;1-\frac{\sigma_1}{\sigma_2 z}\right), \qquad z>\frac{\sigma_1}{\sigma_2}, \qquad (16)$$

where β = β1 + β2.
Proof. The joint p.d.f. of Y1 and Y2 is given by

$$\frac{y_1^{\beta_1-1}y_2^{\beta_2-1}\,\Gamma(\alpha_1;\,y_1/\sigma_1)\,\Gamma(\alpha_2;\,y_2/\sigma_2)}{\sigma_1^{\beta_1}\sigma_2^{\beta_2}\,\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha_1+\beta_1)\Gamma(\alpha_2+\beta_2)}, \qquad y_1>0,\quad y_2>0.$$

Transforming Z = Y1 /Y2 and Y = Y2 with the Jacobian J(y1, y2 → z, y) = y in the above density and integrating y, the marginal p.d.f. of Z is derived as

$$\frac{z^{\beta_1-1}}{\sigma_1^{\beta_1}\sigma_2^{\beta_2}\,\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha_1+\beta_1)\Gamma(\alpha_2+\beta_2)}\,I(z), \qquad z>0, \qquad (17)$$

where

$$I(z)=\int_0^\infty y^{\beta-1}\,\Gamma\!\left(\alpha_1;\frac{yz}{\sigma_1}\right)\Gamma\!\left(\alpha_2;\frac{y}{\sigma_2}\right)dy. \qquad (18)$$
Replacing Γ(α1; yz/σ1)Γ(α2; y/σ2) by its equivalent integral representation, namely,

$$\Gamma\!\left(\alpha_1;\frac{yz}{\sigma_1}\right)\Gamma\!\left(\alpha_2;\frac{y}{\sigma_2}\right)=\int_0^\infty\!\!\int_0^\infty u^{\alpha_1-1}v^{\alpha_2-1}\exp\left(-u-v-\frac{yz}{\sigma_1 u}-\frac{y}{\sigma_2 v}\right)du\,dv,$$

in (18) and integrating y, we get

$$I(z)=\Gamma(\beta)\,(\sigma_1\sigma_2)^{\beta}\int_0^\infty\!\!\int_0^\infty \frac{u^{\alpha_1+\beta-1}v^{\alpha_2+\beta-1}\exp(-u-v)}{(\sigma_2 vz+\sigma_1 u)^{\beta}}\,du\,dv. \qquad (19)$$
For 0 < z < σ1/σ2, substituting w = u + v and t = v/(u + v) with the Jacobian J(u, v → w, t) = w in (19) and integrating w, we get

$$I(z)=\sigma_2^{\beta}\,\Gamma(\beta)\Gamma(\alpha_1+\alpha_2+\beta)\int_0^1 \frac{t^{\alpha_2+\beta-1}(1-t)^{\alpha_1+\beta-1}}{\left[1-\{1-\sigma_2 z/\sigma_1\}t\right]^{\beta}}\,dt$$
$$=\frac{\sigma_2^{\beta}\,\Gamma(\beta)\Gamma(\alpha_1+\alpha_2+\beta)\Gamma(\alpha_1+\beta)\Gamma(\alpha_2+\beta)}{\Gamma(\alpha_1+\alpha_2+2\beta)}\,{}_2F_1\!\left(\alpha_2+\beta,\beta;\alpha_1+\alpha_2+2\beta;1-\frac{\sigma_2 z}{\sigma_1}\right), \qquad z<\frac{\sigma_1}{\sigma_2}, \qquad (20)$$

where we have used the integral representation of the Gauss hypergeometric function given in (9). Similarly, for σ1/σ2 < z < ∞, we use the substitution w = u + v and x = u/(u + v) with the Jacobian J(u, v → x, w) = w in (19) and integrate w, to get

$$I(z)=\sigma_1^{\beta}\,\Gamma(\beta)\Gamma(\alpha_1+\alpha_2+\beta)\,z^{-\beta}\int_0^1 \frac{x^{\alpha_1+\beta-1}(1-x)^{\alpha_2+\beta-1}}{\left[1-\{1-\sigma_1/(\sigma_2 z)\}x\right]^{\beta}}\,dx$$
$$=\frac{\sigma_1^{\beta}\,\Gamma(\beta)\Gamma(\alpha_1+\alpha_2+\beta)\Gamma(\alpha_1+\beta)\Gamma(\alpha_2+\beta)}{\Gamma(\alpha_1+\alpha_2+2\beta)}\,z^{-\beta}\,{}_2F_1\!\left(\alpha_1+\beta,\beta;\alpha_1+\alpha_2+2\beta;1-\frac{\sigma_1}{\sigma_2 z}\right), \qquad z>\frac{\sigma_1}{\sigma_2}. \qquad (21)$$

Now, substituting (20) and (21) in (17), we get the desired result.
4 The Bivariate Macdonald Distribution
In this section, we derive marginal and conditional distributions, distributions
of the product, quotient, and sum of random variables jointly distributed as
bivariate Macdonald.
Theorem 4.1. If (Y1, Y2) ∼ BM(α, β1, β2, σ), then Y1 ∼ M(α + β2, β1, σ) and Y2 ∼ M(α + β1, β2, σ).

Proof. Replacing Γ(α; (y1 + y2)/σ) by its equivalent integral representation, namely,

$$\Gamma\!\left(\alpha;\frac{y_1+y_2}{\sigma}\right)=\int_0^\infty u^{\alpha-1}\exp\left(-u-\frac{y_1+y_2}{\sigma u}\right)du,$$

in (3) and integrating y2, the marginal p.d.f. of Y1 is derived as

$$\frac{y_1^{\beta_1-1}}{\sigma^{\beta_1+\beta_2}\,\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha+\beta_1+\beta_2)}\int_0^\infty u^{\alpha-1}\exp\left(-u-\frac{y_1}{\sigma u}\right)\int_0^\infty y_2^{\beta_2-1}\exp\left(-\frac{y_2}{\sigma u}\right)dy_2\,du$$
$$=\frac{y_1^{\beta_1-1}}{\sigma^{\beta_1}\,\Gamma(\beta_1)\Gamma(\alpha+\beta_1+\beta_2)}\int_0^\infty u^{\alpha+\beta_2-1}\exp\left(-u-\frac{y_1}{\sigma u}\right)du.$$

Now, evaluating the above integral by using the definition of the extended gamma function, we get the desired result.
Using the results of the above theorem, the conditional density of Y1 given Y2 = y2 is derived as

$$\frac{y_1^{\beta_1-1}\,\Gamma\!\left(\alpha;\,(y_1+y_2)/\sigma\right)}{\sigma^{\beta_1}\,\Gamma(\beta_1)\,\Gamma\!\left(\alpha+\beta_1;\,y_2/\sigma\right)}, \qquad y_1>0,$$

and the conditional density of Y2 given Y1 = y1 is obtained as

$$\frac{y_2^{\beta_2-1}\,\Gamma\!\left(\alpha;\,(y_1+y_2)/\sigma\right)}{\sigma^{\beta_2}\,\Gamma(\beta_2)\,\Gamma\!\left(\alpha+\beta_2;\,y_1/\sigma\right)}, \qquad y_2>0.$$
Theorem 4.2. If (Y1 , Y2 ) ∼ BM(α, β1 , β2 , σ), then Y1 + Y2 and Y1 /(Y1 + Y2 )
are independent, Y1 + Y2 ∼ M(α, β1 + β2 , σ) and Y1 /(Y1 + Y2 ) ∼ B1(β1 , β2 ).
Proof. Transforming S = Y1 + Y2 and Z = Y1 /(Y1 + Y2 ) with the Jacobian
J(y1 , y2 → z, s) = s in (3), it is easy to see that S and Z are independent,
S ∼ M(α, β1 + β2 , σ) and Z follows a beta type 1 distribution with parameters
β1 and β2 .
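Theorem 4.2 can be checked by simulation using the gamma-mixture reading of (3) (a sketch of ours: for α + β1 + β2 > 0, take U ∼ Ga(α + β1 + β2) and, given U = u, draw Yi independently from Ga(βi, σu)). The fraction Y1/(Y1 + Y2) should then behave like a B1(β1, β2) variable, with mean β1/(β1 + β2), while the sum should have the M(α, β1 + β2, σ) mean σ(β1 + β2)(α + β1 + β2) given by (13).

```python
import random

alpha, b1, b2, sigma = 1.0, 2.0, 3.0, 1.0
rng = random.Random(2016)

n = 200_000
sum_s = sum_z = 0.0
for _ in range(n):
    u = rng.gammavariate(alpha + b1 + b2, 1.0)
    y1 = rng.gammavariate(b1, sigma * u)
    y2 = rng.gammavariate(b2, sigma * u)
    sum_s += y1 + y2            # S = Y1 + Y2
    sum_z += y1 / (y1 + y2)     # Z = Y1 / (Y1 + Y2)

mean_s, mean_z = sum_s / n, sum_z / n
print(mean_s)   # target: sigma*(b1+b2)*(alpha+b1+b2) = 30
print(mean_z)   # target: b1/(b1+b2) = 0.4
```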
Corollary 4.2.1. If (Y1 , Y2 ) ∼ BM(α, β1 , β2 , σ), then Y1 /Y2 ∼ B2(β1 , β2 ).
By definition, the product moments of Y1 and Y2 associated with (3) are given by

$$E(Y_1^r Y_2^s)=\int_0^\infty\!\!\int_0^\infty \frac{y_1^{\beta_1+r-1}y_2^{\beta_2+s-1}\,\Gamma\!\left(\alpha;\,(y_1+y_2)/\sigma\right)}{\sigma^{\beta_1+\beta_2}\,\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha+\beta_1+\beta_2)}\,dy_1\,dy_2$$
$$=\frac{\sigma^{r+s}\,\Gamma(\beta_1+r)\Gamma(\beta_2+s)\Gamma(\alpha+\beta_1+\beta_2+r+s)}{\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha+\beta_1+\beta_2)}. \qquad (22)$$
Substituting appropriately in (22), the means and variances of Y1 and Y2 and the covariance between Y1 and Y2 are computed as

$$E(Y_i)=\sigma\beta_i(\alpha+\beta_1+\beta_2),$$
$$E(Y_i^2)=\sigma^2\beta_i(\beta_i+1)(\alpha+\beta_1+\beta_2)(\alpha+\beta_1+\beta_2+1),$$
$$\mathrm{Var}(Y_i)=\sigma^2\beta_i(\alpha+\beta_1+\beta_2)(\alpha+\beta_1+\beta_2+\beta_i+1),$$
$$E(Y_1Y_2)=\sigma^2\beta_1\beta_2(\alpha+\beta_1+\beta_2)(\alpha+\beta_1+\beta_2+1),$$

and

$$\mathrm{Cov}(Y_1,Y_2)=\sigma^2\beta_1\beta_2(\alpha+\beta_1+\beta_2).$$

The correlation coefficient between Y1 and Y2 is given by

$$\rho_{Y_1,Y_2}=\sqrt{\frac{\beta_1\beta_2}{(\alpha+2\beta_1+\beta_2+1)(\alpha+\beta_1+2\beta_2+1)}}.$$

Further, for β1 = β2 = β, the correlation coefficient between Y1 and Y2 is given by

$$\rho_{Y_1,Y_2}=\frac{\beta}{\alpha+3\beta+1}.$$
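These moment and correlation expressions can be checked against (22) directly. The deterministic sketch below (function name and parameter values are ours) computes the correlation from raw product moments and, for β1 = β2 = β, compares it with the closed form β/(α + 3β + 1):

```python
import math

def pm(r, s, alpha, b1, b2, sigma):
    # product moment (22) of BM(alpha, b1, b2, sigma)
    A = alpha + b1 + b2
    return (sigma ** (r + s) * math.gamma(b1 + r) * math.gamma(b2 + s)
            * math.gamma(A + r + s) / (math.gamma(b1) * math.gamma(b2) * math.gamma(A)))

alpha, b, sigma = 1.2, 1.7, 0.9
m10 = pm(1, 0, alpha, b, b, sigma)
m01 = pm(0, 1, alpha, b, b, sigma)
m20 = pm(2, 0, alpha, b, b, sigma)
m02 = pm(0, 2, alpha, b, b, sigma)
m11 = pm(1, 1, alpha, b, b, sigma)

cov = m11 - m10 * m01
rho = cov / math.sqrt((m20 - m10 ** 2) * (m02 - m01 ** 2))
closed = b / (alpha + 3 * b + 1)
print(rho, closed)
```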
Substituting s = r, β1 = β, β2 = β + 1/2 in (22) and using the duplication formula, the r-th moment of 2√(Y1 Y2) is obtained as

$$E\left[(2\sqrt{Y_1Y_2})^r\right]=\frac{\sigma^r\,\Gamma(2\beta+r)\Gamma(\alpha+2\beta+1/2+r)}{\Gamma(2\beta)\Gamma(\alpha+2\beta+1/2)}.$$

Now, comparing the above moment expression with (13), we can conclude that if (Y1, Y2) ∼ BM(α, β, β + 1/2, σ), then 2√(Y1 Y2) ∼ M(α + 1/2, 2β, σ). For s = −r, the expression (22) reduces to

$$E(Y_1^r Y_2^{-r})=\frac{\Gamma(\beta_1+r)\Gamma(\beta_2-r)}{\Gamma(\beta_1)\Gamma(\beta_2)}, \qquad (23)$$

which shows that Y1 /Y2 has a standard beta type 2 distribution with parameters β1 and β2.
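The conclusion 2√(Y1 Y2) ∼ M(α + 1/2, 2β, σ) can be confirmed on moments, comparing E[(2√(Y1 Y2))^r] computed from (22) with the Macdonald moment (13). The deterministic sketch below is our own illustration (function names and parameter values are ours):

```python
import math

def pm(r, s, alpha, b1, b2, sigma):
    # product moment (22) of BM(alpha, b1, b2, sigma)
    A = alpha + b1 + b2
    return (sigma ** (r + s) * math.gamma(b1 + r) * math.gamma(b2 + s)
            * math.gamma(A + r + s) / (math.gamma(b1) * math.gamma(b2) * math.gamma(A)))

def macdonald_moment(r, alpha, beta, sigma):
    # r-th moment (13) of M(alpha, beta, sigma)
    return (sigma ** r * math.gamma(beta + r) * math.gamma(alpha + beta + r)
            / (math.gamma(beta) * math.gamma(alpha + beta)))

alpha, b, sigma = 0.8, 1.4, 1.3
errs = []
for r in (1, 2, 3):
    # E[(2*sqrt(Y1 Y2))^r] via (22) with b1 = b, b2 = b + 1/2
    lhs = 2.0 ** r * pm(r / 2.0, r / 2.0, alpha, b, b + 0.5, sigma)
    rhs = macdonald_moment(r, alpha + 0.5, 2 * b, sigma)
    errs.append(abs(lhs - rhs) / rhs)
print(errs)
```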
In the next theorem we derive the distribution of 2√(Y1 Y2) using a transformation of variables.

Theorem 4.3. If (Y1, Y2) ∼ BM(α, β1, β2, σ), then the p.d.f. of Z = 2√(Y1 Y2) is given by

$$\frac{(z/2)^{2\beta_1-1}}{\sigma^{2\beta_1}\,\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha+\beta_1+\beta_2)}\int_0^\infty u^{\alpha+\beta_2-\beta_1-1}\exp(-u)\,\Gamma\!\left(\beta_2-\beta_1;\frac{z^2}{4\sigma^2u^2}\right)du, \qquad z>0. \qquad (24)$$
Proof. Replacing Γ(α; (y1 + y2)/σ) by its equivalent integral representation, namely,

$$\Gamma\!\left(\alpha;\frac{y_1+y_2}{\sigma}\right)=\int_0^\infty u^{\alpha-1}\exp\left(-u-\frac{y_1+y_2}{\sigma u}\right)du,$$

in (3) and making the transformation Z = 2√(Y1 Y2) and Y = Y2 with the Jacobian J(y1, y2 → z, y) = z/2y, the joint p.d.f. of Z and Y is derived as

$$\frac{(z/2)^{2\beta_1-1}\,y^{\beta_2-\beta_1-1}}{\sigma^{\beta_1+\beta_2}\,\Gamma(\beta_1)\Gamma(\beta_2)\Gamma(\alpha+\beta_1+\beta_2)}\int_0^\infty u^{\alpha-1}\exp\left[-u-\frac{1}{\sigma u}\left(y+\frac{z^2}{4y}\right)\right]du.$$

Now, integrating y by using the definition of the extended gamma function, we get the desired result.
Corollary 4.3.1. Let (Y1, Y2) ∼ BM(α, β, β + n + 1/2, σ), where n is a non-negative integer. Then, the p.d.f. of Z = 2√(Y1 Y2) is given by

$$\frac{\sqrt{\pi}\,(z/2)^{2\beta+n-1}}{\sigma^{2\beta+n}\,\Gamma(\beta)\Gamma(\beta+n+1/2)\Gamma(\alpha+2\beta+n+1/2)}\sum_{m=0}^{n}\left(\frac{\sigma}{2z}\right)^{m}\frac{\Gamma(n+m+1)}{\Gamma(n-m+1)\,m!}\,\Gamma\!\left(\alpha+m+\frac{1}{2};\frac{z}{\sigma}\right), \qquad z>0.$$
Proof. Using (7) we can write

$$\Gamma\!\left(n+\frac{1}{2};\frac{z^2}{4\sigma^2u^2}\right)=\sqrt{\pi}\left(\frac{z}{2\sigma u}\right)^{n}\exp\left(-\frac{z}{\sigma u}\right)\sum_{m=0}^{n}\left(\frac{\sigma u}{2z}\right)^{m}\frac{\Gamma(n+m+1)}{\Gamma(n-m+1)\,m!}.$$

Further,

$$\int_0^\infty u^{\alpha+n+1/2-1}\exp(-u)\,\Gamma\!\left(n+\frac{1}{2};\frac{z^2}{4\sigma^2u^2}\right)du$$
$$=\sqrt{\pi}\left(\frac{z}{2\sigma}\right)^{n}\sum_{m=0}^{n}\left(\frac{\sigma}{2z}\right)^{m}\frac{\Gamma(n+m+1)}{\Gamma(n-m+1)\,m!}\int_0^\infty u^{\alpha+m+1/2-1}\exp\left(-u-\frac{z}{\sigma u}\right)du$$
$$=\sqrt{\pi}\left(\frac{z}{2\sigma}\right)^{n}\sum_{m=0}^{n}\left(\frac{\sigma}{2z}\right)^{m}\frac{\Gamma(n+m+1)}{\Gamma(n-m+1)\,m!}\,\Gamma\!\left(\alpha+m+\frac{1}{2};\frac{z}{\sigma}\right). \qquad (25)$$

Now, replacing β1 and β2 by β and β + n + 1/2, respectively, in (24) and using (25), we get the desired result.
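For n = 0 the corollary gives the closed form √π (z/2)^{2β−1} Γ(α + 1/2; z/σ) / (σ^{2β} Γ(β)Γ(β + 1/2)Γ(α + 2β + 1/2)), which must integrate to one over z > 0. A brute-force numerical check (our own sketch; `ext_gamma` is a throwaway quadrature for (1), and the parameter values are ours):

```python
import math

def ext_gamma(nu, sigma, n=1200, lo=-25.0, hi=25.0):
    # Trapezoidal quadrature for (1) after substituting t = exp(x).
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        t = math.exp(x)
        f = math.exp(nu * x - t - sigma / t)
        s += f if 0 < i < n else 0.5 * f
    return h * s

alpha, beta, sigma = 1.0, 1.0, 1.0

def pdf(z):
    # n = 0 case of Corollary 4.3.1
    c = (math.sqrt(math.pi) / (sigma ** (2 * beta) * math.gamma(beta)
         * math.gamma(beta + 0.5) * math.gamma(alpha + 2 * beta + 0.5)))
    return c * (z / 2.0) ** (2 * beta - 1) * ext_gamma(alpha + 0.5, z / sigma)

# integrate the density over z > 0 (log substitution z = exp(x))
n, lo, hi = 400, -12.0, 6.0
h = (hi - lo) / n
total = 0.0
for i in range(n + 1):
    x = lo + i * h
    z = math.exp(x)
    f = pdf(z) * z
    total += f if 0 < i < n else 0.5 * f
total *= h
print(total)   # should be close to 1
```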
Acknowledgements. The research work of DKN and LES was supported
by the Sistema Universitario de Investigación, Universidad de Antioquia under
the project no. IN10231CE.
References

[1] M. A. Chaudhry and S. M. Zubair, Generalized incomplete gamma functions with applications, Journal of Computational and Applied Mathematics, 55 (1994), 99-123. http://dx.doi.org/10.1016/0377-0427(94)90187-2

[2] M. Aslam Chaudhry and Syed M. Zubair, Extended gamma and digamma functions, Fractional Calculus and Applied Analysis, 4 (2001), no. 3, 303-324.

[3] M. Aslam Chaudhry and Syed M. Zubair, Extended incomplete gamma functions with applications, Journal of Mathematical Analysis and Applications, 274 (2002), no. 2, 725-745. http://dx.doi.org/10.1016/s0022-247x(02)00354-2
[4] M. Aslam Chaudhry and Syed M. Zubair, On an extension of generalized
incomplete gamma functions with applications, Journal of the Australian
Mathematical Society, Series B. Applied Mathematics, 37 (1996), no. 3,
392-405. http://dx.doi.org/10.1017/s0334270000010730
[5] A. Erdélyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher Transcendental Functions, Vol. I, II, Based, in part, on notes left by Harry
Bateman, McGraw-Hill Book Company, Inc., New York-Toronto-London,
1953.
[6] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, Translated from the Russian, Sixth edition, Translation edited and
with a preface by Alan Jeffrey and Daniel Zwillinger, Academic Press,
Inc., San Diego, CA, 2000.
[7] A. K. Gupta and D. K. Nagar, Matrix Variate Distributions, Chapman
& Hall/CRC, Boca Raton, 2000.
[8] N. L. Johnson, S. Kotz and N. Balakrishnan, Continuous Univariate
Distributions-2 , Second Edition, John Wiley & Sons, New York, 1994.
[9] Y. L. Luke, The Special Functions and Their Approximations, Vol. 1,
Academic Press, New York, 1969.
[10] Daya K. Nagar, Alejandro Roldán-Correa and Arjun K. Gupta, Extended
matrix variate gamma and beta functions, Journal of Multivariate Analysis, 122 (2013), 53-69. http://dx.doi.org/10.1016/j.jmva.2013.07.001
[11] Daya K. Nagar, Alejandro Roldán-Correa and Arjun K. Gupta, Matrix
variate Macdonald distribution, Communications in Statistics - Theory
and Methods, 45 (2016), no. 5, 1311-1328.
http://dx.doi.org/10.1080/03610926.2013.861494
[12] Daya K. Nagar, Edwin Zarrazola and Luz Estela Sánchez, A bivariate distribution whose marginal laws are gamma and Macdonald, International
Journal of Mathematical Analysis, 10 (2016), no. 10, 455-467.
http://dx.doi.org/10.12988/ijma.2016.6219
Received: March 27, 2016; Published: May 11, 2016