BULLETIN OF THE GEORGIAN NATIONAL ACADEMY OF SCIENCES, vol. 6, no. 1, 2012
Mathematics
Partially Independent Random Variables
Omar Glonti
I. Javakhishvili Tbilisi State University
(Presented by Member of the Academy Elizbar Nadaraya)
ABSTRACT. In this paper the definition of A-independence of random variables X and Y is introduced and an example of A-independent random variables is constructed. The regression of X on Y and the regression of Y on X are investigated. The joint characteristic function of these random variables is also obtained. © 2012 Bull. Georg. Natl. Acad. Sci.
Key words: random variables, A-independence, regression, characteristic function.
Introduction. One of the important and fundamental concepts of probability theory is the independence of random variables. In this paper we introduce the definition of partial independence of two random variables. Using the standard bivariate normal distribution density we construct a nontrivial example of a joint probability distribution density for such partially independent random variables. We investigate the properties of this distribution, find the conditional probability distribution densities and calculate the regressions. We also give an expression for the characteristic function of this joint probability distribution.
1. A-independent random variables.
Definition. We say that real random variables $X$ and $Y$ on the probability space $(\Omega, \mathcal{F}, P)$ are $A$-independent ($A$ is a subset of $R^2$) if and only if $F_{XY}(x,y) = F_X(x) F_Y(y)$ for all $(x,y) \in A$, where $F_{XY}(x,y) = P(X \le x, Y \le y)$ is the joint probability distribution function of $X$ and $Y$, and $F_X(x)$ and $F_Y(y)$ are the probability distribution functions of $X$ and $Y$ respectively.
It is clear that independence in the usual sense (see the definition, for example, in [1]) of random variables $X$ and $Y$ coincides with $A = R^2$-independence.
If the joint probability distribution density $f_{XY}(x,y)$ exists, then we say that $X$ and $Y$ are $A$-independent if and only if $f_{XY}(x,y) = f_X(x) f_Y(y)$ for all $(x,y) \in A$, where $f_X(x)$ and $f_Y(y)$ are the probability distribution densities of $X$ and $Y$ respectively.
We begin by constructing a special example of A-independent random variables using the joint standard normal distribution density
$$f(x,y) = \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{-\frac{x^2 + y^2 - 2\rho xy}{2(1-\rho^2)}\right\}, \qquad |\rho| < 1.$$
It is known (see, for example, [2]) that $f(x,y) = f(x) f(y/x) = f(y) f(x/y)$, where $f(x)$ and $f(y)$ are the standard normal densities and
$$f(x/y) = \frac{1}{\sqrt{2\pi(1-\rho^2)}} \exp\left\{-\frac{(x-\rho y)^2}{2(1-\rho^2)}\right\}, \qquad f(y/x) = \frac{1}{\sqrt{2\pi(1-\rho^2)}} \exp\left\{-\frac{(y-\rho x)^2}{2(1-\rho^2)}\right\}.$$
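As a quick numerical check of this factorisation (a sketch added here, not part of the original paper; $\rho = 0.6$ is an arbitrary illustrative value):

```python
import math

RHO = 0.6  # illustrative correlation, |rho| < 1

def f_joint(x, y, rho=RHO):
    """Standard bivariate normal density f(x, y)."""
    s2 = 1 - rho * rho
    return math.exp(-(x * x + y * y - 2 * rho * x * y) / (2 * s2)) / (2 * math.pi * math.sqrt(s2))

def f_marg(x):
    """Standard normal density f(x)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_cond(x, y, rho=RHO):
    """Conditional density f(x/y): normal with mean rho*y and variance 1 - rho^2."""
    s2 = 1 - rho * rho
    return math.exp(-((x - rho * y) ** 2) / (2 * s2)) / math.sqrt(2 * math.pi * s2)

# f(x, y) = f(y) f(x/y) = f(x) f(y/x) at a few sample points
for (x, y) in [(0.3, -1.2), (1.5, 0.7), (-0.4, 2.0)]:
    assert abs(f_joint(x, y) - f_marg(y) * f_cond(x, y)) < 1e-12
    assert abs(f_joint(x, y) - f_marg(x) * f_cond(y, x)) < 1e-12
```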
Let
$$A_{++} = \{(x,y) \in R^2 : x \ge 0,\ y \ge 0\}, \qquad A_{+-} = \{(x,y) \in R^2 : x \ge 0,\ y < 0\},$$
$$A_{-+} = \{(x,y) \in R^2 : x < 0,\ y \ge 0\}, \qquad A_{--} = \{(x,y) \in R^2 : x < 0,\ y < 0\},$$
and
$$g(x,y) = C_+^2 I_{A_{++}}(x,y) f(x,y) + C_-^2 I_{A_{--}}(x,y) f(x,y) + I_{A_{+-}}(x,y)\, u_+(x) f(x)\, u_-(y) f(y) + I_{A_{-+}}(x,y)\, u_-(x) f(x)\, u_+(y) f(y), \qquad (1)$$
where $I_A(x,y)$ is the indicator of the set $A$ and
$$u_+(x) = C_+ \Delta_{++}^{-1/2} \int_0^\infty f(y/x)\,dy \ \text{ if } x \ge 0, \qquad u_+(x) = 0 \ \text{ if } x < 0, \qquad (2)$$
$$u_-(x) = C_- \Delta_{--}^{-1/2} \int_{-\infty}^0 f(y/x)\,dy \ \text{ if } x < 0, \qquad u_-(x) = 0 \ \text{ if } x \ge 0. \qquad (3)$$

Here

0
    f ( x, y )dxdy 
00
0
  f ( x, y)dxdy
(4)
  
(the values of    (  ) can be found from tables of standard bivariate normal distribution, see, for example,
[3]-[5]);
C and C are some constants.
Denote
$$A_+ = \int_0^\infty u_+(x) f(x)\,dx, \qquad A_- = \int_{-\infty}^0 u_-(x) f(x)\,dx. \qquad (5)$$
After substituting $u_+(x)$ and $u_-(x)$ from (2) and (3) into (5) we obtain
$$A_+ = C_+ \Delta_{++}^{1/2} \quad \text{and} \quad A_- = C_- \Delta_{--}^{1/2}. \qquad (6)$$
Let us choose $C_+$ and $C_-$ in such a way that $C_+ > 0$, $C_- > 0$ and
$$C_+ + C_- = \Delta_{++}^{-1/2} \quad \left(\text{for example } C_+ = C_- = \tfrac{1}{2}\Delta_{++}^{-1/2}, \ \text{ or } \ C_+ = \tfrac{1}{3}\Delta_{++}^{-1/2},\ C_- = \tfrac{2}{3}\Delta_{++}^{-1/2}\right).$$
Then from (6):
$$A_+ + A_- = 1, \qquad (7)$$
and if
$$g(x) = u(x) f(x), \qquad (8)$$
where
$$u(x) = \begin{cases} u_+(x) & \text{if } x \ge 0, \\ u_-(x) & \text{if } x < 0, \end{cases} \qquad (9)$$
we have
$$\int_{-\infty}^\infty g(x)\,dx = \int_{-\infty}^\infty u(x) f(x)\,dx = \int_{-\infty}^0 u_-(x) f(x)\,dx + \int_0^\infty u_+(x) f(x)\,dx = A_- + A_+ = 1.$$
Therefore $g(x)$ is a probability distribution density.
Now we can verify that
$$\int_{-\infty}^\infty \int_{-\infty}^\infty g(x,y)\,dx\,dy = 1,$$
where $g(x,y)$ is defined by (1). Really, from (1)-(4),
$$\int_{-\infty}^\infty \int_{-\infty}^\infty g(x,y)\,dx\,dy = (C_+^2 + C_-^2)\Delta_{++} + 2 C_+ C_- \Delta_{++} = (C_+^2 + C_-^2 + 2 C_+ C_-)\Delta_{++} = (C_+ + C_-)^2 \Delta_{++} = 1.$$
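The normalisation conditions (6)-(7) and the total mass of $g$ can be checked numerically. The sketch below (illustrative, $\rho = 0.6$, with the asymmetric choice $C_+ = \frac{1}{3}\Delta_{++}^{-1/2}$, $C_- = \frac{2}{3}\Delta_{++}^{-1/2}$) uses the standard facts $\int_0^\infty f(u/x)\,du = \Phi(\rho x/\sqrt{1-\rho^2})$ and $\Delta_{++} = \frac{1}{4} + \frac{\arcsin\rho}{2\pi}$:

```python
import math

RHO = 0.6
SIG = math.sqrt(1 - RHO * RHO)
DELTA = 0.25 + math.asin(RHO) / (2 * math.pi)   # orthant probability Delta_{++} = Delta_{--}
C_P = (1 / 3) / math.sqrt(DELTA)                # C+ ; any C+ + C- = Delta^{-1/2} works
C_M = (2 / 3) / math.sqrt(DELTA)                # C-

def phi(x): return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def u_plus(x):   # (2); the inner integral equals Phi(rho*x / sqrt(1-rho^2))
    return C_P / math.sqrt(DELTA) * Phi(RHO * x / SIG) if x >= 0 else 0.0

def u_minus(x):  # (3); the inner integral equals Phi(-rho*x / sqrt(1-rho^2))
    return C_M / math.sqrt(DELTA) * Phi(-RHO * x / SIG) if x < 0 else 0.0

# A+ and A- of (5), by one-dimensional midpoint quadrature on (0, 8) and (-8, 0)
h, n = 0.001, 8000
A_p = sum(u_plus((i + 0.5) * h) * phi((i + 0.5) * h) for i in range(n)) * h
A_m = sum(u_minus(-(i + 0.5) * h) * phi(-(i + 0.5) * h) for i in range(n)) * h

assert abs(A_p - C_P * math.sqrt(DELTA)) < 1e-4   # (6)
assert abs(A_m - C_M * math.sqrt(DELTA)) < 1e-4
assert abs(A_p + A_m - 1.0) < 1e-4                # (7)
# total mass of g: (C+^2 + C-^2) Delta + 2 A+ A- = (C+ + C-)^2 Delta = 1
assert abs((C_P**2 + C_M**2) * DELTA + 2 * A_p * A_m - 1.0) < 1e-3
```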
We show that the marginal distribution densities of $g(x,y)$ are $g(x) = u(x) f(x)$ and $g(y) = u(y) f(y)$:
$$\int_{-\infty}^\infty g(x,y)\,dy = I_{[0,\infty)}(x) \int_{-\infty}^\infty g(x,y)\,dy + I_{(-\infty,0)}(x) \int_{-\infty}^\infty g(x,y)\,dy$$
$$= I_{[0,\infty)}(x)\left[C_+^2 f(x) \int_0^\infty f(y/x)\,dy + u_+(x) f(x) A_-\right] + I_{(-\infty,0)}(x)\left[C_-^2 f(x) \int_{-\infty}^0 f(y/x)\,dy + u_-(x) f(x) A_+\right]$$
$$= I_{[0,\infty)}(x)\left[C_+ \Delta_{++}^{1/2}\, u_+(x) f(x) + u_+(x) f(x) A_-\right] + I_{(-\infty,0)}(x)\left[C_- \Delta_{--}^{1/2}\, u_-(x) f(x) + u_-(x) f(x) A_+\right]$$
$$= I_{[0,\infty)}(x)\, u_+(x) f(x) \left(C_+ \Delta_{++}^{1/2} + A_-\right) + I_{(-\infty,0)}(x)\, u_-(x) f(x) \left(C_- \Delta_{--}^{1/2} + A_+\right)$$
$$= I_{[0,\infty)}(x)\, u_+(x) f(x) + I_{(-\infty,0)}(x)\, u_-(x) f(x) = g(x).$$
Similarly,
$$\int_{-\infty}^\infty g(x,y)\,dx = g(y).$$
From (1)-(3), (8), (9) it is clear that $g(x,y) = g(x) g(y)$ on $A_{+-} \cup A_{-+}$, and really
$$g(x,y) = C_+^2 I_{A_{++}}(x,y) f(x,y) + C_-^2 I_{A_{--}}(x,y) f(x,y) + I_{A_{+-}}(x,y)\, g(x) g(y) + I_{A_{-+}}(x,y)\, g(x) g(y).$$
Thus we have proved the following
Theorem. The real function $g(x,y)$ defined on $R^2$ by (1) is a probability distribution density with marginal distribution densities $g(x)$ and $g(y)$ of the same form, defined by (8). The random variables $X$ and $Y$ with this joint distribution density $f_{XY}(x,y) = g(x,y)$ are $A_{+-} \cup A_{-+}$-independent.
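To make the theorem concrete, the following sketch (illustrative values: $\rho = 0.6$, symmetric choice $C_+ = C_-$) evaluates $g$: on the opposite-sign quadrants the joint density factorises into the marginals by construction, while on $A_{++}$ the usual dependence survives:

```python
import math

RHO = 0.6
SIG = math.sqrt(1 - RHO * RHO)
DELTA = 0.25 + math.asin(RHO) / (2 * math.pi)   # Delta_{++} = Delta_{--}
C = 0.5 / math.sqrt(DELTA)                      # symmetric choice C+ = C-

def phi(x): return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f_joint(x, y):
    """Standard bivariate normal density."""
    return math.exp(-(x * x + y * y - 2 * RHO * x * y) / (2 * SIG * SIG)) / (2 * math.pi * SIG)

def u(x):
    """(9): u+(x) for x >= 0, u-(x) for x < 0; inner integral in closed form."""
    p = Phi(RHO * x / SIG) if x >= 0 else Phi(-RHO * x / SIG)
    return C / math.sqrt(DELTA) * p

def g_marg(x):
    """Marginal density (8)."""
    return u(x) * phi(x)

def g_joint(x, y):
    """Joint density (1) with C+ = C-."""
    if (x >= 0) == (y >= 0):
        return C * C * f_joint(x, y)    # A++ and A--
    return g_marg(x) * g_marg(y)        # A+- and A-+: product form

# factorises on the opposite-sign quadrants (by construction) ...
assert abs(g_joint(1.0, -0.5) - g_marg(1.0) * g_marg(-0.5)) < 1e-12
# ... but not on A++, where the usual dependence remains
assert abs(g_joint(0.5, 0.5) - g_marg(0.5) * g_marg(0.5)) > 1e-2
```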
2. Regression.
Suppose the random variables $X$ and $Y$ are $A_{+-} \cup A_{-+}$-independent and have the joint probability distribution density $f_{XY}(x,y) = g(x,y)$, where $g(x,y)$ is defined by (1). It is not difficult to obtain the conditional density
$$f_X(x/Y=y) = \frac{g(x,y)}{g(y)} = C_+^2 I_{A_{++}}(x,y) \frac{f(x,y)}{u_+(y) f(y)} + C_-^2 I_{A_{--}}(x,y) \frac{f(x,y)}{u_-(y) f(y)} + I_{A_{-+}}(x,y)\, u_-(x) f(x) + I_{A_{+-}}(x,y)\, u_+(x) f(x)$$
$$= C_+^2 I_{A_{++}}(x,y) \frac{f(x/y)}{u_+(y)} + C_-^2 I_{A_{--}}(x,y) \frac{f(x/y)}{u_-(y)} + I_{A_{-+}}(x,y)\, u_-(x) f(x) + I_{A_{+-}}(x,y)\, u_+(x) f(x).$$
So
$$f_X(x/Y=y) = I_{A_{++}}(x,y)\, C_+ \Delta_{++}^{1/2} \frac{f(x/y)}{\int_0^\infty f(u/y)\,du} + I_{A_{--}}(x,y)\, C_- \Delta_{--}^{1/2} \frac{f(x/y)}{\int_{-\infty}^0 f(u/y)\,du} + I_{A_{-+}}(x,y)\, C_- \Delta_{--}^{-1/2} f(x) \int_{-\infty}^0 f(u/x)\,du + I_{A_{+-}}(x,y)\, C_+ \Delta_{++}^{-1/2} f(x) \int_0^\infty f(u/x)\,du \qquad (10)$$
and
$$f_Y(y/X=x) = I_{A_{++}}(x,y)\, C_+ \Delta_{++}^{1/2} \frac{f(y/x)}{\int_0^\infty f(u/x)\,du} + I_{A_{--}}(x,y)\, C_- \Delta_{--}^{1/2} \frac{f(y/x)}{\int_{-\infty}^0 f(u/x)\,du} + I_{A_{+-}}(x,y)\, C_- \Delta_{--}^{-1/2} f(y) \int_{-\infty}^0 f(u/y)\,du + I_{A_{-+}}(x,y)\, C_+ \Delta_{++}^{-1/2} f(y) \int_0^\infty f(u/y)\,du. \qquad (11)$$
From these expressions we see that on the set $A_{+-} \cup A_{-+}$ the conditional distribution density $f_X(x/Y=y)$ is a function only of $x$ and $f_Y(y/X=x)$ is a function only of $y$. This is natural because $X$ and $Y$ are independent on this set.
Using (10) and (11) we find the regression of X on Y and of Y on X.
Regression of X on Y:
$$E(X/Y=y) = \int_{-\infty}^\infty x f_X(x/Y=y)\,dx = I_{[0,\infty)}(y)\, C_+ \Delta_{++}^{1/2} \frac{\int_0^\infty x f(x/y)\,dx}{\int_0^\infty f(u/y)\,du} + I_{(-\infty,0)}(y)\, C_- \Delta_{--}^{1/2} \frac{\int_{-\infty}^0 x f(x/y)\,dx}{\int_{-\infty}^0 f(u/y)\,du}$$
$$+ I_{[0,\infty)}(y)\, C_- \Delta_{--}^{-1/2} \int_{-\infty}^0 x f(x) \left(\int_{-\infty}^0 f(u/x)\,du\right) dx + I_{(-\infty,0)}(y)\, C_+ \Delta_{++}^{-1/2} \int_0^\infty x f(x) \left(\int_0^\infty f(u/x)\,du\right) dx. \qquad (12)$$
Note that here
$$\int_0^\infty f(u/y)\,du = 1 - \Phi\!\left(\frac{-\rho y}{\sqrt{1-\rho^2}}\right), \qquad \int_{-\infty}^0 f(u/y)\,du = \Phi\!\left(\frac{-\rho y}{\sqrt{1-\rho^2}}\right),$$
$$\int_0^\infty u f(u/y)\,du = \left(\frac{1-\rho^2}{2\pi}\right)^{1/2} e^{-\frac{\rho^2 y^2}{2(1-\rho^2)}} + \rho y \left[1 - \Phi\!\left(\frac{-\rho y}{\sqrt{1-\rho^2}}\right)\right],$$
$$\int_{-\infty}^0 u f(u/y)\,du = -\left(\frac{1-\rho^2}{2\pi}\right)^{1/2} e^{-\frac{\rho^2 y^2}{2(1-\rho^2)}} + \rho y\, \Phi\!\left(\frac{-\rho y}{\sqrt{1-\rho^2}}\right), \qquad (13)$$
where $\Phi$ is the standard normal distribution function.
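The integrals in (13) are the mass and first moment of a normal $N(\rho y,\, 1-\rho^2)$ law restricted to a half-line. A sketch (illustrative values $\rho = 0.6$, $y = 0.8$; not part of the paper) checking the closed forms against quadrature:

```python
import math

RHO, Y = 0.6, 0.8                      # illustrative values
SIG = math.sqrt(1 - RHO * RHO)
MU = RHO * Y                           # f(u/y) is the N(rho*y, 1 - rho^2) density

def f_cond(u):
    return math.exp(-((u - MU) ** 2) / (2 * SIG * SIG)) / (SIG * math.sqrt(2 * math.pi))

def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))

h, n = 0.0005, 20000                    # midpoint rule on (0, 10)
m0 = sum(f_cond((i + 0.5) * h) for i in range(n)) * h
m1 = sum((i + 0.5) * h * f_cond((i + 0.5) * h) for i in range(n)) * h

# closed forms from (13)
m0_cf = 1 - Phi(-RHO * Y / SIG)
m1_cf = math.sqrt((1 - RHO**2) / (2 * math.pi)) * math.exp(-(RHO * Y)**2 / (2 * SIG**2)) \
        + RHO * Y * (1 - Phi(-RHO * Y / SIG))

assert abs(m0 - m0_cf) < 1e-6
assert abs(m1 - m1_cf) < 1e-6
```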
Denote
$$K_- = C_- \Delta_{--}^{-1/2} \int_{-\infty}^0 x f(x) \left(\int_{-\infty}^0 f(u/x)\,du\right) dx, \qquad K_+ = C_+ \Delta_{++}^{-1/2} \int_0^\infty x f(x) \left(\int_0^\infty f(u/x)\,du\right) dx. \qquad (14)$$
It is clear that
$$K_+ = K_+(\rho) = \int_0^\infty x g(x)\,dx, \qquad K_- = K_-(\rho) = \int_{-\infty}^0 x g(x)\,dx,$$
and
$$K_+ + K_- = \int_{-\infty}^\infty x g(x)\,dx = EX.$$
Using (13) and (14), from (12) we obtain:
$$E(X/Y=y) = I_{[0,\infty)}(y)\, C_+ \Delta_{++}^{1/2} \left[\left(\frac{1-\rho^2}{2\pi}\right)^{1/2} e^{-\frac{\rho^2 y^2}{2(1-\rho^2)}} + \rho y \left(1 - \Phi\!\left(\frac{-\rho y}{\sqrt{1-\rho^2}}\right)\right)\right] \left[1 - \Phi\!\left(\frac{-\rho y}{\sqrt{1-\rho^2}}\right)\right]^{-1}$$
$$+ I_{(-\infty,0)}(y)\, C_- \Delta_{--}^{1/2} \left[-\left(\frac{1-\rho^2}{2\pi}\right)^{1/2} e^{-\frac{\rho^2 y^2}{2(1-\rho^2)}} + \rho y\, \Phi\!\left(\frac{-\rho y}{\sqrt{1-\rho^2}}\right)\right] \left[\Phi\!\left(\frac{-\rho y}{\sqrt{1-\rho^2}}\right)\right]^{-1}$$
$$+ I_{[0,\infty)}(y)\, K_- + I_{(-\infty,0)}(y)\, K_+. \qquad (15)$$
Similarly,
$$E(Y/X=x) = I_{[0,\infty)}(x)\, C_+ \Delta_{++}^{1/2} \left[\left(\frac{1-\rho^2}{2\pi}\right)^{1/2} e^{-\frac{\rho^2 x^2}{2(1-\rho^2)}} + \rho x \left(1 - \Phi\!\left(\frac{-\rho x}{\sqrt{1-\rho^2}}\right)\right)\right] \left[1 - \Phi\!\left(\frac{-\rho x}{\sqrt{1-\rho^2}}\right)\right]^{-1}$$
$$+ I_{(-\infty,0)}(x)\, C_- \Delta_{--}^{1/2} \left[-\left(\frac{1-\rho^2}{2\pi}\right)^{1/2} e^{-\frac{\rho^2 x^2}{2(1-\rho^2)}} + \rho x\, \Phi\!\left(\frac{-\rho x}{\sqrt{1-\rho^2}}\right)\right] \left[\Phi\!\left(\frac{-\rho x}{\sqrt{1-\rho^2}}\right)\right]^{-1}$$
$$+ I_{[0,\infty)}(x)\, K_- + I_{(-\infty,0)}(x)\, K_+. \qquad (16)$$
Representations (15) and (16) show that the regression of X on Y and the regression of Y on X are not linear.
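The nonlinearity can also be seen numerically. The following sketch (illustrative, $\rho = 0.6$, symmetric $C_+ = C_-$) evaluates the $y \ge 0$ branch of (15) against direct quadrature of (12) and shows that $E(X/Y=y)/y$ is not constant:

```python
import math

RHO = 0.6
SIG = math.sqrt(1 - RHO * RHO)
DELTA = 0.25 + math.asin(RHO) / (2 * math.pi)   # orthant probability Delta_{++}
C_P = C_M = 0.5 / math.sqrt(DELTA)              # symmetric choice C+ = C-

def phi(x): return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))

h, n = 0.001, 10000                              # midpoint grid on (0, 10)

# K- of (14); the inner integral over (-inf, 0) equals Phi(-rho*x / sigma)
K_M = C_M / math.sqrt(DELTA) * sum(
    -(i + 0.5) * h * phi((i + 0.5) * h) * Phi(RHO * (i + 0.5) * h / SIG)
    for i in range(n)) * h

def regression(y):
    """Closed form (15), branch y >= 0."""
    t = RHO * y / SIG
    num = math.sqrt((1 - RHO**2) / (2 * math.pi)) * math.exp(-(RHO * y) ** 2 / (2 * SIG**2)) \
          + RHO * y * Phi(t)
    return C_P * math.sqrt(DELTA) * num / Phi(t) + K_M

def regression_direct(y):
    """Direct quadrature of (12), branch y >= 0."""
    m0 = sum(phi(((i + 0.5) * h - RHO * y) / SIG) / SIG for i in range(n)) * h
    m1 = sum((i + 0.5) * h * phi(((i + 0.5) * h - RHO * y) / SIG) / SIG for i in range(n)) * h
    return C_P * math.sqrt(DELTA) * m1 / m0 + K_M

for y in (0.5, 1.5):
    assert abs(regression(y) - regression_direct(y)) < 1e-4
# E(X/Y=y) is not proportional to y:
assert abs(regression(1.0) - regression(2.0) / 2) > 1e-2
```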
Remark. Denote
$$f^+(x/y) = \begin{cases} \dfrac{f(x/y)}{\int_0^\infty f(u/y)\,du}, & x \ge 0, \\[2mm] 0, & x < 0, \end{cases} \qquad (17)$$
and
$$f^-(x/y) = \begin{cases} \dfrac{f(x/y)}{\int_{-\infty}^0 f(u/y)\,du}, & x < 0, \\[2mm] 0, & x \ge 0. \end{cases} \qquad (18)$$
Then we can rewrite (10) in the following form:
$$f_X(x/Y=y) = I_{A_{++}}(x,y)\, C_+ \Delta_{++}^{1/2} f^+(x/y) + I_{A_{--}}(x,y)\, C_- \Delta_{--}^{1/2} f^-(x/y) + I_{A_{-+}}(x,y)\, C_- \Delta_{--}^{-1/2} f(x) \int_{-\infty}^0 f(u/x)\,du + I_{A_{+-}}(x,y)\, C_+ \Delta_{++}^{-1/2} f(x) \int_0^\infty f(u/x)\,du. \qquad (19)$$
Note that $f^+(x/y)$ and $f^-(x/y)$ defined by (17) and (18) are conditional densities.
3. Joint characteristic function.
Let $C_+ = C_- = \frac{1}{2}\Delta_{++}^{-1/2}$; then from (1):
$$g^*(x,y) = \frac{1}{4\Delta_{++}}\, I_{A_{++}}(x,y) f(x,y) + \frac{1}{4\Delta_{--}}\, I_{A_{--}}(x,y) f(x,y) + I_{A_{+-}}(x,y)\, g^*(x) g^*(y) + I_{A_{-+}}(x,y)\, g^*(x) g^*(y), \qquad (20)$$
where $g^*(x)$ and $g^*(y)$ are defined by (8), (9), (2) and (3) with $C_+ = C_- = \frac{1}{2}\Delta_{++}^{-1/2}$.
Denote
$$\alpha_+ = \int_0^\infty g^*(x)\,dx, \qquad \alpha_- = \int_{-\infty}^0 g^*(x)\,dx \qquad (\alpha_+ + \alpha_- = 1), \qquad (21)$$
and rewrite (20) in the following form:
$$g^*(x,y) = I_{A_{++}}(x,y)\, \frac{1}{4}\, \frac{f(x,y)}{\Delta_{++}} + I_{A_{--}}(x,y)\, \frac{1}{4}\, \frac{f(x,y)}{\Delta_{--}} + I_{A_{+-}}(x,y)\, \alpha_+ \alpha_-\, \frac{g^*(x)}{\alpha_+} \frac{g^*(y)}{\alpha_-} + I_{A_{-+}}(x,y)\, \alpha_- \alpha_+\, \frac{g^*(x)}{\alpha_-} \frac{g^*(y)}{\alpha_+}. \qquad (22)$$
Note that here
$$\Delta_{++} = \int_0^\infty \int_0^\infty f(x,y)\,dx\,dy = \int_{-\infty}^0 \int_{-\infty}^0 f(x,y)\,dx\,dy.$$
Let the random variables $U$ and $V$, defined on $(\Omega, \mathcal{F}, P)$ with values in $A_{++} \cup A_{--}$, have the joint probability distribution density
$$f_{UV}(x,y) = \begin{cases} \dfrac{f(x,y)}{2\Delta_{++}}, & (x,y) \in A_{++} \cup A_{--}, \\[2mm] 0, & (x,y) \notin A_{++} \cup A_{--}. \end{cases} \qquad (23)$$
Assume that the random variables $\xi$ and $\eta$, with values in $[0,\infty)$ and in $(-\infty,0)$ respectively, have the probability distribution densities
$$f_\xi(x) = \begin{cases} \dfrac{g^*(x)}{\alpha_+}, & x \ge 0, \\[2mm] 0, & x < 0, \end{cases} \qquad (24)$$
and
$$f_\eta(x) = \begin{cases} \dfrac{g^*(x)}{\alpha_-}, & x < 0, \\[2mm] 0, & x \ge 0. \end{cases} \qquad (25)$$
Then it is clear that the joint characteristic function corresponding to $g^*(x,y)$ has the form
$$\varphi(z_1, z_2) = \int_{-\infty}^\infty \int_{-\infty}^\infty e^{i(z_1 x + z_2 y)} g^*(x,y)\,dx\,dy = \frac{1}{2}\varphi_{UV}(z_1, z_2) + \alpha_+ \alpha_- \left[\varphi_\xi(z_1)\varphi_\eta(z_2) + \varphi_\eta(z_1)\varphi_\xi(z_2)\right], \qquad (26)$$
where $\varphi_{UV}(z_1, z_2)$ is the characteristic function corresponding to the joint density (23), and $\varphi_\xi(z)$ and $\varphi_\eta(z)$ are the characteristic functions corresponding to the densities (24) and (25) respectively.
Remark 2. If in (26) $z_1 = 0$ and $z_2 = 0$, we obtain $\alpha_+ \alpha_- = \frac{1}{4}$. Really, in this case, when $C_+ = C_- = \frac{1}{2}\Delta_{++}^{-1/2}$, we have:
$$\alpha_+ = \int_0^\infty g^*(x)\,dx = \int_0^\infty u^*(x) f(x)\,dx = \frac{1}{2\Delta_{++}} \int_0^\infty \left(\int_0^\infty f(y/x)\,dy\right) f(x)\,dx = \frac{1}{2\Delta_{++}} \int_0^\infty \int_0^\infty f(x,y)\,dx\,dy = \frac{1}{2}$$
and
$$\alpha_- = \int_{-\infty}^0 g^*(x)\,dx = \int_{-\infty}^0 u^*(x) f(x)\,dx = \frac{1}{2\Delta_{--}} \int_{-\infty}^0 \left(\int_{-\infty}^0 f(y/x)\,dy\right) f(x)\,dx = \frac{1}{2\Delta_{--}} \int_{-\infty}^0 \int_{-\infty}^0 f(x,y)\,dx\,dy = \frac{1}{2}.$$
Therefore $\alpha_+ = \alpha_- = \frac{1}{2}$ and $\alpha_+ \alpha_- = \frac{1}{4}$.
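The value $\alpha_+ = \alpha_- = \frac{1}{2}$ can be confirmed by quadrature (a sketch with the illustrative value $\rho = 0.6$, not part of the paper):

```python
import math

RHO = 0.6
SIG = math.sqrt(1 - RHO * RHO)
DELTA = 0.25 + math.asin(RHO) / (2 * math.pi)   # Delta_{++}, orthant probability
C = 0.5 / math.sqrt(DELTA)                      # C+ = C- = (1/2) Delta^{-1/2}

def phi(x): return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def g_star(x):
    """Marginal density (8) under the symmetric choice of constants."""
    p = Phi(RHO * x / SIG) if x >= 0 else Phi(-RHO * x / SIG)
    return C / math.sqrt(DELTA) * p * phi(x)

h, n = 0.001, 10000
alpha_p = sum(g_star((i + 0.5) * h) for i in range(n)) * h
alpha_m = sum(g_star(-(i + 0.5) * h) for i in range(n)) * h

assert abs(alpha_p - 0.5) < 1e-5
assert abs(alpha_m - 0.5) < 1e-5
# setting z1 = z2 = 0 in (26): 1 = 1/2 + 2 * alpha+ * alpha-
assert abs(0.5 + 2 * alpha_p * alpha_m - 1.0) < 1e-4
```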
Mathematics

Partially Independent Random Variables

O. Glonti

I. Javakhishvili Tbilisi State University

(Presented by Academy Member E. Nadaraya)

The notion of A-independence of random variables X and Y is defined and an example of A-independent random variables is constructed. The regression of X on Y and of Y on X is considered. The form of the characteristic function of such random variables is found.
REFERENCES
1. A.N. Shiryaev (1980), Veroyatnost'. M. (in Russian).
2. M.G. Kendall, A. Stuart (1966), Teoriya raspredelenii. M. (in Russian).
3. D.B. Owen (1966), Sbornik statisticheskikh tablits. M. (in Russian).
4. M. Jantaravareerat (1998), Approximation of the Distribution Function for the Standard Bivariate Normal. Ph.D. Thesis. Illinois Institute of Technology, Chicago, IL.
5. M. Jantaravareerat, N. Thomopoulos (1998), Computing Sciences and Statistics, 30: 170-184.
Received September, 2011