Uniform Correlation Mixture of
Multivariate Normal Distributions
Kai Zhang
Department of Statistics and Operations Research
UNC Chapel Hill
(Joint work with Brown, George, and Zhao)
June 10, 2014
Bivariate Normal Distribution

Consider a random vector X = (X1, X2)^T that follows the bivariate normal
distribution with zero means, unit variances, and correlation ρ ∈ (−1, 1):
\[
\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \sim N\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right).
\]
The probability density function of X = (X1, X2)^T is well-known:
\[
f(x_1, x_2 \mid \rho) = \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{ -\frac{x_1^2 + x_2^2 - 2\rho x_1 x_2}{2(1-\rho^2)} \right\}.
\]
How about integrating it?
Integrating f(x1, x2 | ρ)

I believe everyone knows the answer to this integral
\[
\int_{-\infty}^{\infty} \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{ -\frac{x_1^2 + x_2^2 - 2\rho x_1 x_2}{2(1-\rho^2)} \right\} dx_1 = \frac{1}{\sqrt{2\pi}}\, e^{-x_2^2/2}
\]
and this
\[
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{ -\frac{x_1^2 + x_2^2 - 2\rho x_1 x_2}{2(1-\rho^2)} \right\} dx_1\, dx_2 = 1.
\]
Now how about this?
\[
\int_{-1}^{1} \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{ -\frac{x_1^2 + x_2^2 - 2\rho x_1 x_2}{2(1-\rho^2)} \right\} d\rho = \;???
\]
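The first two integrals are easy to confirm numerically. Below is a minimal sketch (the correlation ρ = 0.6 and the test points are arbitrary choices for illustration, not values from the slides):

```python
# Numerical sanity check of the two marginalization integrals above.
import numpy as np
from scipy import integrate

def f(x1, x2, rho):
    """Bivariate normal density with zero means, unit variances, correlation rho."""
    q = (x1**2 + x2**2 - 2 * rho * x1 * x2) / (2 * (1 - rho**2))
    return np.exp(-q) / (2 * np.pi * np.sqrt(1 - rho**2))

rho = 0.6
for x2 in [0.0, 1.0, 2.5]:
    val, _ = integrate.quad(lambda x1: f(x1, x2, rho), -np.inf, np.inf)
    print(val, np.exp(-x2**2 / 2) / np.sqrt(2 * np.pi))   # the two numbers should agree

total, _ = integrate.dblquad(lambda x1, x2: f(x1, x2, rho),
                             -np.inf, np.inf, lambda _: -np.inf, lambda _: np.inf)
print(total)                                              # should be close to 1
```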
The Uniform Correlation Mixture Integral

Let's think about this integral
\[
\int_{-1}^{1} \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{ -\frac{x_1^2 + x_2^2 - 2\rho x_1 x_2}{2(1-\rho^2)} \right\} d\rho.
\]
This integral is finite for any x1 and x2.

Consider
\[
f(x_1, x_2) = \frac{1}{2} \int_{-1}^{1} \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{ -\frac{x_1^2 + x_2^2 - 2\rho x_1 x_2}{2(1-\rho^2)} \right\} d\rho.
\]
f(x1, x2) is the marginal density of (X1, X2)^T if ρ ∼ Uniform[−1, 1].
\[
\int_{-\infty}^{\infty} f(x_1, x_2)\, dx_1 = \frac{1}{\sqrt{2\pi}}\, e^{-x_2^2/2} \quad\text{and}\quad \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x_1, x_2)\, dx_1\, dx_2 = 1.
\]
f(x1, x2) is the "average" of the elliptic contours of bivariate normal densities with equal weights.

What would the contour of f(x1, x2) look like? Any guess?
Outline
The Uniform Correlation Mixture Integral
Connection to Khintchine Mixture
Generalizations and Open Questions
The Uniform Correlation Mixture Integral

A surprising answer:
\[
f(x_1, x_2) = \int_{-1}^{1} \frac{1}{2}\, f(x_1, x_2 \mid \rho)\, d\rho = \frac{1}{2}\bigl(1 - \Phi(\|x\|_\infty)\bigr),
\]
where Φ(·) is the cdf of the standard normal distribution, and ‖x‖∞ = max{|x1|, |x2|}.

Is this correct?
- f(0, 0) = ∫_{−1}^{1} 1/(4π√(1−ρ²)) dρ = 1/4 = (1 − Φ(0))/2.
- For any x, does f(x, x) = f(x, 0)?
  - f(x, x) = ∫_{−1}^{1} 1/(4π√(1−ρ1²)) exp{−x²/(1+ρ1)} dρ1.
  - f(x, 0) = ∫_{−1}^{1} 1/(4π√(1−ρ2²)) exp{−x²/(2(1−ρ2²))} dρ2.
  - Consider the substitution ρ1 = 1 − 2ρ2². It is easily seen that f(x, x) = f(x, 0).

Is this new?
- The anonymous reviewer of the paper is an expert in distribution theory. He claimed he did an extensive literature search without seeing a similar result.
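The identity is also easy to check by quadrature. A minimal sketch (the test points are arbitrary choices for illustration):

```python
# Compare the uniform correlation mixture, computed numerically, with the
# closed form (1/2)(1 - Phi(||x||_inf)).
import numpy as np
from scipy import integrate
from scipy.stats import norm

def f_cond(rho, x1, x2):
    """Bivariate normal density f(x1, x2 | rho)."""
    q = (x1**2 + x2**2 - 2 * rho * x1 * x2) / (2 * (1 - rho**2))
    return np.exp(-q) / (2 * np.pi * np.sqrt(1 - rho**2))

for x1, x2 in [(0.0, 0.0), (1.0, -0.3), (2.0, 2.0), (0.5, 1.7)]:
    mixture, _ = integrate.quad(f_cond, -1, 1, args=(x1, x2))
    closed_form = 1 - norm.cdf(max(abs(x1), abs(x2)))
    print(f"({x1}, {x2}):  {0.5 * mixture:.6f}  vs  {0.5 * closed_form:.6f}")
```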
Plot of f(x1, x2)

[Surface plot of f(x1, x2) over [−3, 3] × [−3, 3], with height ranging from 0 to 0.25.]
The isodensity contours of the bivariate density f (x1 , x2 ) are
concentric squares.
Each marginal density of f(x1, x2) is the standard normal density.
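The figure is straightforward to reproduce from the closed form. A sketch (grid resolution and plotting choices are arbitrary):

```python
# Plot f(x1, x2) = (1/2)(1 - Phi(||x||_inf)) as a surface and as contours.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

grid = np.linspace(-3, 3, 201)
X1, X2 = np.meshgrid(grid, grid)
F = 0.5 * (1 - norm.cdf(np.maximum(np.abs(X1), np.abs(X2))))

fig = plt.figure(figsize=(10, 4))
ax1 = fig.add_subplot(1, 2, 1, projection="3d")
ax1.plot_surface(X1, X2, F, cmap="viridis")   # pyramid-shaped surface
ax2 = fig.add_subplot(1, 2, 2)
ax2.contour(X1, X2, F, levels=10)             # concentric square contours
ax2.set_aspect("equal")
plt.show()
```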
Further Properties and Implications

The bivariate density f(x1, x2) is the only bivariate density that is square-contoured and is marginally standard normal (details to be shown later).

Bayesian connection: This result indicates that if we have a uniform prior over the correlation ρ, the resulting marginal density depends only on the maximal magnitude of the two variables.

Geometric interpretation: If we "average" the elliptic contours of bivariate normal densities with equal weights, we get a contour of squares.
- Why square-contoured?
- One thought: too much weight on ρ = ±1.
- Other thoughts?
Outline
The Uniform Correlation Mixture Integral
Connection to Khintchine Mixture
Generalizations and Open Questions
Khintchine’s Theorem (1938)

Theorem
A univariate continuous random variable X has a single mode at 0 if and only if it can be expressed as the product
X = YU
where Y and U are independent continuous variables and U has a uniform distribution over [0, 1].

A beautiful theorem that provides easy proofs of many useful properties of unimodal distributions. An example below:

Corollary
For any continuous random variable X that is unimodal about 0, E[X²] ≥ (4/3) E²[X].

Proof. By the decomposition in Khintchine’s theorem,
\[
E[X^2] - \frac{4}{3}E^2[X] = E[Y^2]E[U^2] - \frac{4}{3}E^2[Y]E^2[U] = \frac{1}{3}\bigl(E[Y^2] - E^2[Y]\bigr) \ge 0.
\]
A direct proof could be much harder.
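A quick Monte Carlo illustration of the corollary via the decomposition (the choice Y ~ Exp(1) is an arbitrary example, not from the slides):

```python
# Any Y independent of U ~ Uniform[0, 1] makes X = YU unimodal about 0,
# so E[X^2] should be at least (4/3) E[X]^2.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
Y = rng.exponential(scale=1.0, size=n)   # arbitrary illustrative choice of Y
U = rng.uniform(0.0, 1.0, size=n)
X = Y * U                                # Khintchine decomposition

print(np.mean(X**2), (4 / 3) * np.mean(X)**2)   # the first value should be larger
```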
Khintchine Mixture
Khintchine Mixture: Using Khintchine’s theorem, Bryson and Johnson
(1982) studied bivariate random vectors of the form
X1 = Y1 U1
X2 = Y2 U2
where the Uj's, j = 1, 2, are uniformly distributed over [0, 1], and the distributions of the Yj's are independent of the Uj's and are chosen such that the Xj's have the desired unimodal marginal distributions.
Connection to the Uniform Correlation Mixture

As one example of such mixtures, Bryson and Johnson (1982) studied the following mixture
X1 = Y U1
X2 = Y U2
where Uj ∼ Uniform[−1, 1], j = 1, 2, and Y ∼ χ3 are independent of each other.

With this construction, the joint distribution of (X1, X2)^T satisfies that
- the Xj's are marginally standard normal;
- the joint density of (X1, X2)^T is
\[
f(x_1, x_2) = \frac{1}{2}\bigl(1 - \Phi(\|x\|_\infty)\bigr).
\]
Equivalence of Three Distributions when p = 2

For p = 2, the following bivariate vectors have the same joint distribution:
- (X1, X2)^T such that
\[
\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \sim N\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right), \quad \text{with } \rho \sim \mathrm{Uniform}[-1, 1].
\]
- (X1, X2)^T such that (X1, X2)^T = Y(U1, U2)^T, where Uj ∼ Uniform[−1, 1], j = 1, 2, and Y ∼ χ3 are independent of each other.
- (X1, X2)^T that has joint density f(x1, x2) = (1/2)(1 − Φ(‖x‖∞)), the only density that is a function of ‖x‖∞ and is marginally standard normal.

Does a similar connection hold in higher dimensions and/or for other marginal distributions?
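The equality in law of the first two constructions can be checked by simulation. A minimal sketch (the sample size and the use of ‖X‖∞ as the summary statistic are arbitrary illustration choices):

```python
# Compare construction 1 (normal with rho ~ Uniform[-1, 1]) and construction 2
# (Khintchine mixture with Y ~ chi_3) via the distribution of ||X||_inf.
import numpy as np
from scipy.stats import ks_2samp, chi

rng = np.random.default_rng(1)
n = 200_000

# Construction 1: draw rho, then a bivariate normal with that correlation.
rho = rng.uniform(-1, 1, size=n)
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
x1_a, x2_a = z1, rho * z1 + np.sqrt(1 - rho**2) * z2

# Construction 2: X = Y * (U1, U2) with Y ~ chi_3 and Uj ~ Uniform[-1, 1].
y = chi.rvs(df=3, size=n, random_state=rng)
x1_b, x2_b = y * rng.uniform(-1, 1, size=n), y * rng.uniform(-1, 1, size=n)

m_a = np.maximum(np.abs(x1_a), np.abs(x2_a))
m_b = np.maximum(np.abs(x1_b), np.abs(x2_b))
print(ks_2samp(m_a, m_b))   # a large p-value is consistent with equality in law
```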
Outline
The Uniform Correlation Mixture Integral
Connection to Khintchine Mixture
Generalizations and Open Questions
Generalization I: Multivariate Khintchine
Mixture and the “Pyramid” Distribution
It is easy to generalize the Khintchine mixture to higher dimensions:
For any p, one simply generates mutually independent Y ∼ χ3 and U1, . . . , Up ∼ Uniform[−1, 1] and sets Xj = YUj, j = 1, . . . , p.
Straightforward calculus shows that the joint density is a function of ‖x‖∞ and is marginally standard normal.
Theorem
Consider a p-dimensional density that is a differentiable function of ‖x‖∞, i.e., gp(x1, . . . , xp) = hp(‖x‖∞) for some differentiable function hp : R+ → R+. If gp(x1, . . . , xp) has standard normal marginal densities, then
\[
g_p(x_1, \ldots, x_p) = \frac{1}{2^{p-1}\sqrt{2\pi}} \int_{\|x\|_\infty}^{\infty} y^{2-p} e^{-y^2/2}\, dy.
\]
For each dimension p, there is a unique p-dimensional density that is a differentiable function of ‖x‖∞ and is marginally standard normal.
This unique distribution can be obtained through the Khintchine mixture.
For p = 1, g1(x1) = (1/√(2π)) e^{−x1²/2}.
For p = 2, g2(x1, x2) = (1/2)(1 − Φ(‖x‖∞)).
For p ≥ 3, gp(0) = ∞.
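Two quick numerical checks of the theorem are sketched below: for p = 2 the formula should reduce to (1/2)(1 − Φ(‖x‖∞)), and for p = 3 a Khintchine-mixture sample should have standard normal margins (grid points and sample size are arbitrary choices):

```python
# Check the g_p formula at p = 2, and the marginal of the p = 3 Khintchine mixture.
import numpy as np
from scipy import integrate
from scipy.stats import norm, chi, kstest

def g_p(m, p):
    """g_p evaluated at any point x with ||x||_inf = m."""
    tail, _ = integrate.quad(lambda y: y**(2 - p) * np.exp(-y**2 / 2), m, np.inf)
    return tail / (2**(p - 1) * np.sqrt(2 * np.pi))

for m in [0.0, 0.5, 1.5]:
    print(g_p(m, p=2), 0.5 * (1 - norm.cdf(m)))        # the two should match

rng = np.random.default_rng(2)
n = 200_000
x1 = chi.rvs(df=3, size=n, random_state=rng) * rng.uniform(-1, 1, size=n)
print(kstest(x1, "norm"))                              # margin should look N(0, 1)
```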
Generalization II: Averaging Ellipse-Contoured Densities in p = 2

Would "averaging" any ellipse-contoured family of densities with equal weights result in a square-contoured density?

For p = 2, yes.

Theorem
Suppose f(x1, x2 | ρ) ∝ det(Σ)^{−1/2} v(−x^T Σ^{−1} x / 2), where
\[
\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.
\]
If ρ ∼ Uniform[−1, 1], then
\[
f(x_1, x_2) \propto \int_{1/2}^{\infty} \frac{1}{w\sqrt{2w-1}}\, v\bigl(-\|x\|_\infty^2\, w\bigr)\, dw.
\]
Example: bivariate t distributions.
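A numerical sanity check of the stated proportionality for the bivariate t example (a sketch; the degrees of freedom ν = 5 and the test points are arbitrary assumptions for illustration). Since both sides are only specified up to constants, the printed ratios should be approximately equal across test points:

```python
# Check that the uniform-rho average of a bivariate t kernel is a function of
# ||x||_inf of the stated form (up to a common constant).
import numpy as np
from scipy import integrate

nu = 5.0
v = lambda s: (1 - 2 * s / nu) ** (-(nu + 2) / 2)   # t generator: v(-q/2) = (1 + q/nu)^(-(nu+2)/2)

def averaged(x1, x2):
    """Integral over rho of det(Sigma)^(-1/2) * v(-x' Sigma^(-1) x / 2)."""
    def integrand(rho):
        q = (x1**2 + x2**2 - 2 * rho * x1 * x2) / (1 - rho**2)
        return v(-q / 2) / np.sqrt(1 - rho**2)
    return integrate.quad(integrand, -1, 1)[0]

def square_contoured(x1, x2):
    """The right-hand side of the theorem, as a function of ||x||_inf."""
    m2 = max(abs(x1), abs(x2)) ** 2
    return integrate.quad(lambda w: v(-m2 * w) / (w * np.sqrt(2 * w - 1)),
                          0.5, np.inf)[0]

for x in [(0.3, 0.0), (1.0, 1.0), (2.0, -0.7)]:
    print(averaged(*x) / square_contoured(*x))   # ratios should be roughly constant
```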
Generalization III: Hypercubically Contoured Densities with Unimodal Marginals

For any even and differentiable density function ϕ(·) with a unique mode at 0 and for each dimension p, there is a unique multivariate density κp(x) = κp(x1, . . . , xp) that is a differentiable function of ‖x‖∞ (thus is hypercubically contoured) and has the ϕ(xi)'s as its marginal densities.

Theorem
Consider a p-dimensional density that is a function of ‖x‖∞, i.e., κp(x1, . . . , xp) = ψp(‖x‖∞) for some differentiable function ψp : R+ → R+. If κp(x) has marginal densities ϕ(xi) that are unimodal and differentiable, then
\[
\kappa_p(x_1, \ldots, x_p) = -\int_{\|x\|_\infty}^{\infty} \frac{\varphi'(y)}{(2y)^{p-1}}\, dy.
\]
This distribution can be constructed through the Khintchine mixture method: let Ui ∼ Uniform[−1, 1], i = 1, . . . , p, and Y, with density fY(y) = −y ϕ′(y), be mutually independent, and consider Xi = YUi, i = 1, . . . , p. The joint density of (X1, . . . , Xp) is κp(x).
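The construction is easy to simulate for a concrete ϕ. Below is a sketch using the standard logistic density as the marginal (this choice of ϕ, the grid, and the sample size are assumptions for illustration, not from the slides); Y is drawn from fY(y) = −y ϕ′(y) by numerical inversion of its CDF:

```python
# Khintchine construction with a standard logistic marginal phi.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(3)
p, n = 3, 100_000

phi = lambda y: np.exp(-y) / (1 + np.exp(-y)) ** 2        # logistic density
# f_Y(y) = -y * phi'(y); for the logistic, phi'(y) = -phi(y) * tanh(y / 2)
f_Y = lambda y: y * phi(y) * np.tanh(y / 2)

# Draw Y by numerically inverting its CDF on a fine grid.
grid = np.linspace(-25, 25, 200_001)
cdf = np.cumsum(f_Y(grid))
cdf /= cdf[-1]
Y = np.interp(rng.uniform(size=n), cdf, grid)

# X_i = Y * U_i with independent U_i ~ Uniform[-1, 1].
U = rng.uniform(-1, 1, size=(n, p))
X = Y[:, None] * U

print(kstest(X[:, 0], "logistic"))   # each margin should look standard logistic
```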
Open Questions

For p ≥ 3, is there a measure µp over p-dimensional correlation matrices Σ such that
\[
\int_{\Sigma > 0} f(x_1, \ldots, x_p \mid \Sigma)\, d\mu_p(\Sigma) = \frac{1}{2^{p-1}\sqrt{2\pi}} \int_{\|x\|_\infty}^{\infty} y^{2-p} e^{-y^2/2}\, dy\,?
\]
In generalizing the geometric interpretation, does the above result hold for any general elliptical density u such that
\[
u(x_1, \ldots, x_p) \propto |\Sigma|^{-1/2}\, v\bigl(-x^T \Sigma^{-1} x / 2\bigr)
\]
for some proper univariate function v?

Are there any stochastic processes that explain the integral? If so, can these processes be generalized into higher dimensions?

Bayesian applications: testing, computing, and more?