
Gaussian vectors
Lecture 5
Gaussian random variables in R^n
One-dimensional case
One-dimensional Gaussian density with mean μ and standard deviation σ (called N(μ, σ^2)):

f(x) = \frac{1}{\sqrt{2πσ^2}} \exp\left( -\frac{(x − μ)^2}{2σ^2} \right).
Proposition If X ∼ N(μ, σ^2), then aX + b is again Gaussian. Precisely,

aX + b ∼ N(aμ + b, a^2 σ^2).
Proof

E[φ(aX + b)] = \int φ(ax + b) \frac{1}{\sqrt{2πσ^2}} \exp\left( -\frac{(x − μ)^2}{2σ^2} \right) dx.

With the substitution t = ax + b, dx = dt/a (take a > 0; the case a < 0 is analogous), this becomes

= \int φ(t) \frac{1}{a\sqrt{2πσ^2}} \exp\left( -\frac{(t − (aμ + b))^2}{2a^2 σ^2} \right) dt,

since

x − μ = \frac{(ax + b) − (aμ + b)}{a}.
The function

\frac{1}{a\sqrt{2πσ^2}} \exp\left( -\frac{(t − (aμ + b))^2}{2a^2 σ^2} \right)

is the density of a Gaussian N(aμ + b, a^2 σ^2). The proof is complete.
Corollary If X ∼ N(μ, σ^2), then Z := (X − μ)/σ ∼ N(0, 1). In other words, any Gaussian r.v. X ∼ N(μ, σ^2) can be represented in the form

X = σZ + μ

where Z is canonical (standard).
Remark For any random variable X (not necessarily Gaussian), the transformation

Z := \frac{X − μ}{σ}

is called “standardization”. The r.v. Z always has mean zero and standard deviation 1. However, if X belongs to some class (e.g. Weibull), Z does not necessarily belong to the same class, unless X is Gaussian. This is one of the reasons why the Gaussian part of probability theory is called the “linear theory” (invariance under linear, or even affine, transformations).
Exercise (Theoretical) Show that the Weibull class is not invariant under standardization.
The general property of the previous remark is based on the linearity of the expectation:

E[aX + bY + c] = aE[X] + bE[Y] + c

and the quadratic property of the variance:

Var(aX + b) = a^2 Var(X),

which hold true for all random variables and constants. See “Appunti teorici terza parte”, section 2.
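A small simulation check of these properties in R (the values μ = 2, σ = 3, a = 5, b = −1 below are arbitrary, chosen only for illustration): standardizing a Gaussian sample gives approximately mean 0 and standard deviation 1, and the variance scales by a^2.

mu <- 2; sigma <- 3
X  <- rnorm(100000, mean = mu, sd = sigma)   # sample from N(mu, sigma^2)
Z  <- (X - mu) / sigma                       # standardization
mean(Z); sd(Z)                               # approximately 0 and 1
a <- 5; b <- -1
var(a * X + b) / (a^2 * var(X))              # equal to 1 (the identity holds for sample variances too)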
Multidimensional case
We give the definition of a multidimensional Gaussian variable by reversing the previous procedure.
Definition Canonical (standard) Gaussian density in R^n:

f(x_1, ..., x_n) = \frac{1}{\sqrt{2π}} e^{-x_1^2/2} \cdots \frac{1}{\sqrt{2π}} e^{-x_n^2/2} = \frac{1}{(2π)^{n/2}} e^{-(x_1^2 + \cdots + x_n^2)/2}
or in vector notations:

f(x) = \frac{1}{(2π)^{n/2}} \exp\left( -\frac{\|x\|^2}{2} \right)

where ‖x‖ is the Euclidean norm of x = (x_1, ..., x_n).
A random vector Z = (Z_1, ..., Z_n) with density f(x) will be called a canonical Gaussian vector.
A picture of the canonical Gaussian density in dimension n = 2 was given in the first
lecture. A sample of 100 points from a 2-D Gaussian is:
[Figure: scatter plot of the 100 sampled points, coordinates z.1 and z.2.]
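A minimal R sketch of how such a sample can be produced and plotted (the axis names z.1 and z.2 match the figure):

z <- matrix(rnorm(2 * 100), nrow = 2)            # 100 canonical Gaussian points in R^2
plot(z[1, ], z[2, ], xlab = "z.1", ylab = "z.2")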
Definition General Gaussian random vector X = (X_1, ..., X_n): any random vector of the form

X = AZ + μ

where Z = (Z_1, ..., Z_k) is a canonical Gaussian vector in R^k, for some k, A is a matrix with k inputs (columns) and n outputs (rows), and μ is an n-vector.
In plain words, Gaussian vectors are linear (affine) transformations of canonical Gaussian vectors.
μ = translation.
A: several possibilities: rotation, stretching in some direction... It plays the role of σ
(“large A” means large dispersion), but it is multidimensional. Let us see a few 2-D
examples:
● translation by 1, 1
● multiplication by
2 0
0 1
● multiplication by
followed by 45° rotation, namely multiplication by
0 1
1
2
1 −1
2 0
1
0 1
1
=
2 −1/ 2
2
1/ 2
[Figure: scatter plot of a transformed sample in the plane, coordinates x[1, ] and x[2, ].]
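A minimal R sketch reproducing the last example (stretching followed by the 45° rotation, plus the translation by (1, 1)) on a canonical sample; the sample size 100 is arbitrary:

z   <- matrix(rnorm(2 * 100), nrow = 2)                      # canonical sample
S   <- matrix(c(2, 0, 0, 1), 2, 2, byrow = TRUE)             # stretching
Rot <- matrix(c(1, -1, 1, 1), 2, 2, byrow = TRUE) / sqrt(2)  # 45-degree rotation
A   <- Rot %*% S                               # = (sqrt(2) -1/sqrt(2); sqrt(2) 1/sqrt(2))
mu  <- c(1, 1)                                 # translation
x   <- A %*% z + mu
plot(x[1, ], x[2, ], xlab = "x[1, ]", ylab = "x[2, ]")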
Proposition Let

Q = AA^T

(an n × n square, symmetric matrix). If det Q ≠ 0, then the density of X is

f(x) = \frac{1}{(2π)^{n/2} \sqrt{\det Q}} \exp\left( -\frac{(x − μ)^T Q^{-1} (x − μ)}{2} \right).

Level curves of the density, f(x) = C, have the form

(x − μ)^T Q^{-1} (x − μ) = R^2.

They are ellipsoids.
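A minimal sketch of this density formula in base R (the function name dens and the test values are just for illustration):

dens <- function(x, mu, Q) {
  n <- length(mu)
  d <- x - mu
  drop(exp(-0.5 * t(d) %*% solve(Q) %*% d)) / ((2 * pi)^(n / 2) * sqrt(det(Q)))
}
dens(c(0, 0), c(0, 0), diag(2))   # equals 1/(2*pi), the canonical density at the origin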
Covariance matrix
More on independence
Recall that two events A and B are called independent if

P(A ∩ B) = P(A)P(B)

(more or less equivalently, if P(A|B) = P(A) and P(B|A) = P(B)). Two random variables X, Y are called independent if

P(X ∈ I, Y ∈ J) = P(X ∈ I)P(Y ∈ J)

for every pair of intervals I, J. If they have densities f_X(x), f_Y(y) (called marginals) and joint density f(x, y), then the identity

f(x, y) = f_X(x) · f_Y(y)

is equivalent to independence of X, Y.
Remark Z = (Z_1, ..., Z_n) is a canonical Gaussian vector ⟺ Z_1, ..., Z_n are independent standard 1-d Gaussians.
Proposition If X, Y are independent, then

E[XY] = E[X]E[Y].

This is not a characterization of independence: it may happen that E[XY] = E[X]E[Y] but X, Y are not independent (the average is only a summary of the density, so a property of the product of averages does not imply a product of densities).
However, such examples must be “cooked up” with intention; they do not happen “at random”. Moreover:
Proposition If X, Y are jointly gaussian and

E[XY] = E[X]E[Y]

they are independent.
(A posteriori of this lecture we could prove this claim.)
Definition Given two random variables X, Y, we call covariance the number

Cov(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y].

It is a generalization of the variance: Cov(X, X) = Var(X).
We see that:

Cov(X, Y) = 0 ⟺ E[XY] = E[X]E[Y].

Definition We say that X and Y are uncorrelated if Cov(X, Y) = 0, or equivalently if E[XY] = E[X]E[Y].
Corollary Independent implies uncorrelated.
Uncorrelated and jointly gaussian implies independent.
The number CovX, Y gives a measure of the relation between two random variables.
More closely we could see that it describes the degree of linear relation (regression
theory). Large CovX, Y correspondes to high degree of linear correlation.
A drawback of CovX, Y is that it depends on the unit of measure of X and Y: “large”
CovX, Y is relative to the order of magnitude of the other quantities of the problem. The
correlation coefficient
ρX, Y =
CovX, Y
σXσY
is independent of the unit of measure (it is “absolute”), and
− 1 ≤ ρX, Y ≤ 1.
Again, ρX, Y = 0 means uncorrelated. High degree of correlation becomes ρX, Y close
to +1 or −1 (positive or negative linear correlation).
Proposition In general,

Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y).

Hence, if X, Y are uncorrelated, then

Var(X + Y) = Var(X) + Var(Y).

This is not linearity of the variance. The first identity comes simply from the property (a + b)^2 = a^2 + b^2 + 2ab.
Proposition Cov is linear in both arguments:

Cov(aX_1 + bX_2 + c, Y) = aCov(X_1, Y) + bCov(X_2, Y)

and similarly in the second argument (it is symmetric). Notice that additive constants c disappear (as in the variance).
(Proof: elementary.)
What is Q = AA^T?
Let us understand better Q = AA^T. Write X = AZ + μ in components:
X_1 = A_{11} Z_1 + A_{12} Z_2 + ... + μ_1
X_2 = A_{21} Z_1 + A_{22} Z_2 + ... + μ_2
...
and compute

Cov(X_1, X_2) = Cov\left( \sum_i A_{1i} Z_i + μ_1, \sum_j A_{2j} Z_j + μ_2 \right) = \sum_{i,j} A_{1i} A_{2j} Cov(Z_i, Z_j) = \sum_i A_{1i} A_{2i} = (AA^T)_{1,2} = Q_{1,2}

since Cov(Z_i, Z_j) = 1 if i = j and 0 otherwise (the Z_i are independent standard Gaussians).
In general,

Cov(X_h, X_k) = (AA^T)_{h,k} = Q_{h,k}.
Proposition Q = AA^T is the “covariance matrix” (matrix of covariances).
Covariance is a generalization of the variance; Q is the generalization of σ^2 from the one-dimensional to the multi-dimensional case.
Example For the example

A = \begin{pmatrix} \sqrt{2} & −1/\sqrt{2} \\ \sqrt{2} & 1/\sqrt{2} \end{pmatrix}

we have

Q = AA^T = \begin{pmatrix} \sqrt{2} & −1/\sqrt{2} \\ \sqrt{2} & 1/\sqrt{2} \end{pmatrix} \begin{pmatrix} \sqrt{2} & \sqrt{2} \\ −1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix} = \begin{pmatrix} 5/2 & 3/2 \\ 3/2 & 5/2 \end{pmatrix}.

The covariance between X_1 and X_2 is 3/2.
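The same computation can be checked in R:

A <- matrix(c(sqrt(2), -1/sqrt(2), sqrt(2), 1/sqrt(2)), 2, 2, byrow = TRUE)
A %*% t(A)      # gives the matrix (5/2  3/2; 3/2  5/2)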
Spectral theorem
Any symmetric matrix, hence Q in particular, can be diagonalized: there exists a new orthonormal basis of R^n in which Q is diagonal. The elements of such a basis are eigenvectors of Q, and the elements of Q on the diagonal are the corresponding eigenvalues:
Qv_i = λ_i v_i,

Q = \begin{pmatrix} λ_1 & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & λ_n \end{pmatrix}

in the basis v_1, ..., v_n. The usual convention is to order the eigenvalues in decreasing order.
Example For the example

A = \begin{pmatrix} \sqrt{2} & −1/\sqrt{2} \\ \sqrt{2} & 1/\sqrt{2} \end{pmatrix}, \quad Q = \begin{pmatrix} 5/2 & 3/2 \\ 3/2 & 5/2 \end{pmatrix},

the eigenvectors are v_1 = (1, 1)^T and v_2 = (1, −1)^T, with eigenvalues λ_1 = 4, λ_2 = 1.
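This can be checked in R:

Q <- matrix(c(5/2, 3/2, 3/2, 5/2), 2, 2)
eigen(Q)       # eigenvalues 4 and 1; eigenvectors proportional to (1, 1) and (1, -1)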
The covariance matrix Q = AA^T is also positive semi-definite:

x^T Q x ≥ 0

for all vectors x ∈ R^n. This is equivalent to

λ_i ≥ 0

for i = 1, ..., n. Moreover, det Q ≥ 0.
We have

det Q > 0 ⟺ λ_i > 0 for all i = 1, ..., n.
In such a case, the level curves have the form

\frac{y_1^2}{λ_1} + ... + \frac{y_n^2}{λ_n} = R^2

where (y_1, ..., y_n) are the coordinates in the new basis v_1, ..., v_n. They are ellipses with axes along v_1, ..., v_n and amplitudes along these axes equal to R\sqrt{λ_1}, ..., R\sqrt{λ_n}. The method of Principal Component Analysis (PCA) will be based on these remarks.
Example For our usual example, since

v_1 = (1, 1)^T, \quad v_2 = (1, −1)^T, \quad λ_1 = 4, \quad λ_2 = 1,

the ellipses have the form:

[Figure: level ellipse of the density in the (x, y) plane.]

This can be obtained also from the equation x^T Q^{-1} x = R^2, x = (x, y)^T; namely, with

Q^{-1} = \begin{pmatrix} 0.625 & −0.375 \\ −0.375 & 0.625 \end{pmatrix},

the ellipse is

0.625x^2 + 0.625y^2 − 0.375 · 2xy = 1

(R^2 = 1).
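The inverse matrix used here can be computed in R:

Q <- matrix(c(5/2, 3/2, 3/2, 5/2), 2, 2)
solve(Q)       # gives (0.625  -0.375; -0.375  0.625)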
Generation of multivariate samples
How to generate Gaussian samples with given
covariance?
In many applications Q is known, but A is not. We want to generate a sample under
X = AZ + μ. Problem:
Q ↦ A?
We have to solve the equation (A is the unknown)

AA^T = Q.
The software R gives us the following solution:
require(mgcv)
A<-mroot(Q)
We may choose the dimension k of Z. The simplest choice is k = n = dimension of X. Thus A is a square matrix.
We may choose A symmetric. Thus the equation is

A^2 = Q.

The solution is

A = \sqrt{Q}.

In practice?
Exercise Assume we know the spectral decomposition of Q, namely the eigenvectors v_i and the eigenvalues λ_i. Let U be the orthogonal matrix (U^T = U^{-1}) defined as follows: the i-th column of U is v_i. Check that Λ := U^T Q U is diagonal, with diagonal elements λ_i. The matrix \sqrt{Λ} is simply the diagonal matrix with elements \sqrt{λ_i}. Then set

A := U \sqrt{Λ} U^T.

Check that A is symmetric and A^2 = Q.
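A sketch of this construction in R, checked on the matrix Q of the previous example:

Q <- matrix(c(5/2, 3/2, 3/2, 5/2), 2, 2)
e <- eigen(Q)                                             # columns of e$vectors are the v_i
A <- e$vectors %*% diag(sqrt(e$values)) %*% t(e$vectors)  # U sqrt(Lambda) U^T
isSymmetric(A)                                            # TRUE
A %*% A                                                   # equals Q (up to rounding)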
Generation of non-gaussian samples
Recall the theorem:
Theorem i) If Y is a random variable with cdf F (continuous case), then the random variable

U := F(Y)

is uniformly distributed on (0, 1).
ii) If U is a uniform random variable, then

F^{-1}(U)

is a random variable with cdf F.
Application of both (i) and (ii) gives us:
Corollary If Y is a random variable with cdf F and Φ denotes the cdf of a standard normal, then

Z := Φ^{-1}(F(Y))

is a standard normal variable. And vice versa,

Y = F^{-1}(Φ(Z)).
Algorithm to generate a sample from Y:
● generate a sample from a standard normal variable Z
● compute F^{-1}(Φ(Z)).
Is this nothing more than the old algorithm based on the uniform? No: it extends to the multidimensional, correlated case!
Let Y_1, Y_2 be two r.v. with cdfs F_1, F_2 (continuous case). Compute

X_1 := Φ^{-1}(F_1(Y_1)), \quad X_2 := Φ^{-1}(F_2(Y_2)).

They are standard normal, but not necessarily independent.
Theoretical gap: there is no reason why (X_1, X_2) should be jointly gaussian (a gaussian vector). Assume (X_1, X_2) is jointly gaussian.
● Compute the covariance matrix Q of (X_1, X_2), and the average μ = (μ_1, μ_2).
● Compute A = \sqrt{Q} as above (or any other solution of AA^T = Q).
● Simulate a standard Gaussian vector (Z_1, Z_2).
● Compute (X_1, X_2) from (Z_1, Z_2) by means of A and μ.
● Anti-transform: Y_i = F_i^{-1}(Φ(X_i)), i = 1, 2.
This is a way to generate samples from the non-gaussian correlated r.v. Y_1, Y_2.
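A minimal R sketch of this procedure; the covariance matrix Q, the mean μ, and the exponential marginals below are illustrative assumptions, not values from the lecture:

require(mgcv)
Q  <- matrix(c(1, 0.5, 0.5, 1), 2, 2)    # assumed covariance of (X1, X2)
mu <- c(0, 0)                            # assumed mean of (X1, X2)
A  <- mroot(Q)                           # solves A %*% t(A) = Q
N  <- 1000
Z  <- matrix(rnorm(2 * N), nrow = 2)     # canonical Gaussian samples
X  <- A %*% Z + mu                       # correlated Gaussian samples
Y1 <- qexp(pnorm(X[1, ]), rate = 1)      # anti-transform: F1^{-1}(Phi(X1))
Y2 <- qexp(pnorm(X[2, ]), rate = 2)      # anti-transform: F2^{-1}(Phi(X2))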
Multidimensional data fit
We describe only simple rules.
Assume a sample (x_1, y_1), ..., (x_n, y_n) is given. We cannot plot a joint histogram or cdf. Thus we cannot get a feeling about gaussianity or not. But we can plot the 1-D marginals. A Gaussian vector has Gaussian marginals.
If we want to model our data by a 2-D Gaussian (either because we see a good agreement with gaussianity of the marginals, or because of simplicity), we estimate Q and μ simply by cov and mean, in R.
Otherwise, if we want to describe the marginals by non-gaussian distributions,
● we fit the marginals and find F_1, F_2,
● transform the data by

x'_i = Φ^{-1}(F_1(x_i)), \quad y'_i = Φ^{-1}(F_2(y_i)), \quad i = 1, ..., n

into a new sample (x'_1, y'_1), ..., (x'_n, y'_n),
● assume it is jointly gaussian (we only know that (x'_1, ..., x'_n) and (y'_1, ..., y'_n) are gaussian),
● compute cov and mean of (x'_1, y'_1), ..., (x'_n, y'_n).
This is the model, which we may use for simulation, computation of probabilities, and other purposes.
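A sketch of this recipe in R, where x and y denote the data vectors and the fitted marginals are assumed (purely for illustration) to be exponential with rates estimated from the data:

F1 <- function(u) pexp(u, rate = 1 / mean(x))   # illustrative fitted marginal of x
F2 <- function(u) pexp(u, rate = 1 / mean(y))   # illustrative fitted marginal of y
xp <- qnorm(F1(x))                              # x'_i = Phi^{-1}(F1(x_i))
yp <- qnorm(F2(y))                              # y'_i = Phi^{-1}(F2(y_i))
Q  <- cov(cbind(xp, yp))                        # estimated covariance matrix
mu <- colMeans(cbind(xp, yp))                   # estimated mean vector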
Example
Consider the following 20 points in the plane
[Figure: scatter plot of the 20 points in the (x, y) plane.]
They have been produced artificially by two independent N(1, 1) components. Let us ignore this fact. As an exercise, let us pretend they are the values of two physical quantities measured in 20 experiments.
We want to solve the following problem: compute the probability that both components are positive.
A simple answer is: we count the number of points with positive components, 13 in this example, and answer 13/20 = 0.65. We clearly see that a number of points are close to the boundary, so the result suffers very much from the peculiarities of the sample. We can be sure that, if we repeat the experiments, this number may change considerably.
Thus let us extract a model, a 2-D density, from the data and compute the theoretical probability from it. We hope it is a more stable result.
For simplicity, let us choose a Gaussian fit from the beginning. Compute cov and mean of the data, which in our case are:

Q = \begin{pmatrix} 1.001 & −0.058 \\ −0.058 & 0.798 \end{pmatrix}, \quad μ = (1.146, 0.746).
We see that in this example the first component is fitted quite well with respect to the true N(1, 1) which generated the sample. The second is not: the second sample is poor. The correlation between the two samples is very small, a good indication of independence.
The (gaussian) model has been found. How do we compute the required probability? By Monte Carlo.
Using require(mgcv) and A<-mroot(Q), get A. Then produce N standard points z = (z_1, z_2), transform them by Az + μ,

[Figure: scatter plot of the simulated points, coordinates xx and yy.]

and compute the fraction with both components positive. This is a Monte Carlo approximation of the required probability. At the end we find

p = 0.69.
It is not very different from 13/20 = 0.65. But if we repeat the whole procedure a few times, we see that the second estimate is more stable than the first one (not so much, however; only roughly 20% better).
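A minimal R sketch of this Monte Carlo step, using the fitted Q and μ reported above (the number of simulated points N is arbitrary):

require(mgcv)
Q  <- matrix(c(1.001, -0.058, -0.058, 0.798), 2, 2)
mu <- c(1.146, 0.746)
A  <- mroot(Q)                          # A %*% t(A) = Q
N  <- 100000
Z  <- matrix(rnorm(2 * N), nrow = 2)
X  <- A %*% Z + mu                      # simulated points from the fitted model
mean(X[1, ] > 0 & X[2, ] > 0)           # Monte Carlo estimate of the probability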