Throughout, the square root of a positive number a denotes its positive square root, written √a.
1. Let X be the space of smooth functions R → R with compact support¹ and inner product
(f, g) = ∫ f g dx. Define L : X → X by Lf = f′V′ + f″, where V ∈ X is fixed. Find the adjoint
L∗ of L, that is, an operator L∗ : X → X such that

    (Lf, g) = (f, L∗g)

for all f, g ∈ X.
Solution. Using integration by parts, we have

    (Lf, g) = ∫ g Lf dx = ∫ g (f′V′ + f″) dx
            = ∫ g f′V′ dx + ∫ g f″ dx
            = −∫ f (gV′)′ dx − ∫ f′g′ dx
            = −∫ f (g′V′ + gV″) dx + ∫ f g″ dx
            = ∫ f [−g′V′ − gV″ + g″] dx,

where we used the fact that f, g, V have compact support to see that the boundary terms in the
integration by parts are all 0. To get (Lf, g) = (f, L∗g) we define L∗ by L∗g = −g′V′ − gV″ + g″.
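As an optional sanity check (not part of the original solution), the adjoint identity can be verified symbolically with SymPy. The test functions below are ad hoc polynomial stand-ins: f and g vanish at x = ±1, so every boundary term in the integration by parts vanishes on [−1, 1] just as it does for compactly supported functions on R.

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical stand-ins for compactly supported smooth functions:
# f and g vanish at x = ±1, so all boundary terms vanish on [-1, 1].
f = (1 - x**2)**2
g = (1 - x**2) * (x + 2)
V = x**3 - x  # an arbitrary fixed choice of V

Lf = sp.diff(f, x) * sp.diff(V, x) + sp.diff(f, x, 2)
Lstar_g = (-sp.diff(g, x) * sp.diff(V, x)
           - g * sp.diff(V, x, 2)
           + sp.diff(g, x, 2))

# (Lf, g) and (f, L*g) agree
lhs = sp.integrate(g * Lf, (x, -1, 1))
rhs = sp.integrate(f * Lstar_g, (x, -1, 1))
assert sp.simplify(lhs - rhs) == 0
```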
2. Let A be an invertible complex matrix. Let V be a unitary matrix whose columns are
eigenvectors of A∗A, and let Σ be the diagonal matrix whose diagonal entries are the square
roots of the corresponding eigenvalues.
(a) Let U = AV Σ⁻¹. Show that A = U ΣV∗ is a singular value decomposition of A.
(b) Let A be normal. Show that the columns of U are then eigenvectors of A∗A.
(c) Let A be normal. Show that despite (b), in general A ≠ V ΣV∗ and A ≠ U ΣU∗.
Solution. (a) Since U ΣV∗ = AV Σ⁻¹ΣV∗ = A, all we have to show is that U is unitary. Since V
is a unitary matrix whose columns are the eigenvectors of A∗A, we have V∗A∗AV = D, where
D is the diagonal matrix with diagonal entries equal to the corresponding eigenvalues. Thus
D = Σ² and

    U∗U = (Σ⁻¹)∗V∗A∗AV Σ⁻¹ = Σ⁻¹V∗A∗AV Σ⁻¹ = Σ⁻¹DΣ⁻¹ = I,

where we used the fact that Σ⁻¹ is real and diagonal to get (Σ⁻¹)∗ = Σ⁻¹.
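The construction in (a) is easy to test numerically. The sketch below (an illustrative check, not part of the solution) builds Σ and U from the eigendecomposition of A∗A for a random complex A and confirms that U is unitary and A = U ΣV∗. Note that NumPy's eigh returns eigenvalues in ascending order, so Σ need not be in the conventional descending order; the argument does not require any particular order.

```python
import numpy as np

rng = np.random.default_rng(0)
# A random complex matrix; such a matrix is invertible with probability 1
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Eigendecomposition of the Hermitian matrix A*A
eigvals, V = np.linalg.eigh(A.conj().T @ A)
Sigma = np.diag(np.sqrt(eigvals))
U = A @ V @ np.linalg.inv(Sigma)

assert np.allclose(U.conj().T @ U, np.eye(3))  # U is unitary
assert np.allclose(U @ Sigma @ V.conj().T, A)  # A = U Sigma V*
```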
(b) Observe that

    U DU∗ = AV Σ⁻¹D(Σ⁻¹)∗V∗A∗ = AV Σ⁻¹DΣ⁻¹V∗A∗ = AV V∗A∗ = AA∗ = A∗A,

where the last equality uses the normality of A. This shows that the columns of U are
eigenvectors of A∗A, with eigenvalues given by the diagonal entries of D.

¹Smooth means infinitely differentiable; compact support means the function is identically zero outside a bounded interval.
(c) Note that if A = V ΣV∗ or A = U ΣU∗, then A would be Hermitian (since V and U are
unitary and Σ is real diagonal), but not all normal matrices are Hermitian. The matrix in
Problem 4 serves as a counterexample:

    A = [1 1 0]
        [0 1 1]
        [1 0 1]

is invertible and normal but not Hermitian.
3. Let

    A = [1 0]        [1]
        [0 1],   b = [1].
        [1 1]        [1]

(a) Find a singular value decomposition of A.
(b) Use your answer in (a) to find the pseudoinverse of A.
(c) Use your answer in (b) to find the least squares solution minimizing ‖Ax − b‖₂.
Solution. (a) The vectors

    v₁ = [1/√2]       v₂ = [−1/√2]
         [1/√2],           [ 1/√2]

are orthonormal eigenvectors of

    A∗A = [2 1]
          [1 2]

corresponding to eigenvalues 3, 1. Thus,

    V = [1/√2  −1/√2]       Σ = [√3  0]
        [1/√2   1/√2],          [ 0  1]
                                [ 0  0].
Similar to Problem 2, we obtain the columns of U via

    u₁ = Av₁/√3 = [1/√6]       u₂ = Av₂/1 = [−1/√2]
                  [1/√6],                   [ 1/√2].
                  [2/√6]                    [  0  ]
Extending these vectors to an orthonormal basis of R³ gives

    U = [1/√6  −1/√2   1/√3]
        [1/√6   1/√2   1/√3]
        [2/√6    0    −1/√3].
(b) The pseudoinverse Σ⁺ is obtained by inverting the nonzero entries of Σᵀ:

    Σ⁺ = [1/√3  0  0]
         [ 0    1  0],

and this is used to define the pseudoinverse A⁺ of A via

    A⁺ = V Σ⁺U∗ = [ 2/3  −1/3  1/3]
                  [−1/3   2/3  1/3].
(c) The least squares solution is given by

    x = A⁺b = [2/3]
              [2/3].
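As a quick numerical check of (b) and (c) (an aside, not part of the original solution), NumPy's built-in pseudoinverse and least squares solver reproduce the matrices and the solution computed above:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])

A_plus = np.linalg.pinv(A)  # pseudoinverse
x = A_plus @ b              # least squares solution

assert np.allclose(A_plus, [[2/3, -1/3, 1/3], [-1/3, 2/3, 1/3]])
assert np.allclose(x, [2/3, 2/3])
# agrees with numpy's own least squares solver
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```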
4. Let

    A = [1 1 0]
        [0 1 1]
        [1 0 1].

Show A is normal and find unitary diagonalizations² of A by:
(a) computing [A]ββ for a basis β of unit length eigenvectors of A;
(b) finding a singular value decomposition of A.
Solution. A is normal since

    AA∗ = A∗A = [2 1 1]
                [1 2 1]
                [1 1 2].
(a) A basis of unit eigenvectors of A is β = {v₁, v₂, v₃}, where

    v₁ = [     1/√3     ]       v₂ = [     1/√3     ]       v₃ = [1/√3]
         [−1/(2√3) + i/2]            [−1/(2√3) − i/2]            [1/√3].
         [−1/(2√3) − i/2],           [−1/(2√3) + i/2],           [1/√3]

The corresponding eigenvalues are 1/2 + √3i/2, 1/2 − √3i/2, and 2, so

    [A]ββ = [1/2 + √3i/2       0        0]
            [     0       1/2 − √3i/2  0].
            [     0            0       2]

In other words, A = SDS⁻¹ where the columns of S are the vectors in β and D = [A]ββ. Since A
is normal with distinct eigenvalues, β is orthonormal, so S is unitary and A = SDS∗ is a unitary
diagonalization.
(b) Following the prescription in Problem 2, we get

    V = [ 1/√6   1/√2  1/√3]
        [ 1/√6  −1/√2  1/√3]
        [−2/√6    0    1/√3]

from the eigenvectors of A∗A,

    Σ = [1 0 0]
        [0 1 0]
        [0 0 2]

from the square roots of the corresponding eigenvalues, and

    U = AV Σ⁻¹ = [ 2/√6    0    1/√3]
                 [−1/√6  −1/√2  1/√3].
                 [−1/√6   1/√2  1/√3]
²A unitary diagonalization of A consists of a unitary matrix Q and a diagonal matrix D such that A = QDQ∗.
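As an optional numerical companion to Problem 4 (not part of the original solution), the following sketch checks normality, the stated eigenvalues, and that the eigenvector matrix is unitary. The last assertion relies on the fact that eigenvectors of a normal matrix for distinct eigenvalues are orthogonal, and that NumPy returns eigenvectors normalized to unit length.

```python
import numpy as np

A = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=complex)

# normality: A A* = A* A
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

# eigenvalues are 1/2 ± (√3/2)i and 2
w, S = np.linalg.eig(A)
expected = [0.5 - np.sqrt(3)/2 * 1j, 0.5 + np.sqrt(3)/2 * 1j, 2.0]
assert np.allclose(sorted(w, key=lambda z: (z.real, z.imag)), expected)

# A S = S D, and S is unitary (orthonormal eigenvectors of a normal matrix)
assert np.allclose(A @ S, S * w)
assert np.allclose(S.conj().T @ S, np.eye(3))
```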
√

5. Find a polar decomposition of A = 
5
2i
3 + 3
√
5
− 3i
√3
5
i
3 − 3
√
5
− 3i
√3
5
2i
3 + 3
√
5
i
3 − 3
√

5
− 3i
√3

5
− 3i .
√3
5
2i
3 + 3
Solution. We obtain the polar decomposition A = QP from the singular value decomposition
A = U ΣV∗ via P = V ΣV∗ and Q = U V∗. The SVD can be found as in Problem 2, and the result is

    P = [√5/3 + 2/3   √5/3 − 1/3   √5/3 − 1/3]
        [√5/3 − 1/3   √5/3 + 2/3   √5/3 − 1/3],
        [√5/3 − 1/3   √5/3 − 1/3   √5/3 + 2/3]

    Q = [1/3 + 2i/3   1/3 − i/3    1/3 − i/3 ]
        [1/3 − i/3    1/3 + 2i/3   1/3 − i/3 ].
        [1/3 − i/3    1/3 − i/3    1/3 + 2i/3]
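This result can be verified numerically (an aside, not part of the original solution). Writing A = iI + ((√5 − i)/3)J with J the all-ones matrix reproduces the matrix of Problem 5, and the polar factors from NumPy's SVD match the stated P:

```python
import numpy as np

# The matrix of Problem 5, written as iI + ((√5 - i)/3) J with J all-ones
s5 = np.sqrt(5)
J = np.ones((3, 3))
A = 1j * np.eye(3) + (s5 - 1j) / 3 * J

U, s, Vh = np.linalg.svd(A)
P = Vh.conj().T @ np.diag(s) @ Vh  # P = V Sigma V*
Q = U @ Vh                         # Q = U V*

assert np.allclose(Q @ P, A)                         # A = Q P
assert np.allclose(P, P.conj().T)                    # P is Hermitian
assert np.allclose(Q @ Q.conj().T, np.eye(3))        # Q is unitary
assert np.allclose(P, np.eye(3) + (s5 - 1) / 3 * J)  # matches the stated P
```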
6. Let A be positive definite, that is, Hermitian with positive eigenvalues. Show there is a unique
positive definite matrix √A such that (√A)² = A.

Solution. By the spectral theorem for self-adjoint maps, A = U DU∗, where U is a unitary matrix
whose columns are eigenvectors of A, and D is a diagonal matrix whose diagonal entries are
the eigenvalues of A, which are all real. Since A is positive definite, the eigenvalues are all
positive, and √A can be defined via √A = U √D U∗, where √D is obtained from D by taking
square roots of all the entries of D. This takes care of existence. For uniqueness, suppose there
is another positive definite matrix B such that B² = A. Then B = V ΣV∗, where Σ is a diagonal
matrix whose diagonal entries are the eigenvalues of B, which are all real and positive. As
A = B² = V Σ²V∗, we conclude that the diagonal entries of Σ² are the eigenvalues of A. Thus,
the diagonal entries of Σ are the square roots of the eigenvalues of A, i.e., Σ = √D. Moreover,
the columns of V are corresponding eigenvectors of A, so B = V √D V∗. Since √D acts on each
eigenspace of A as multiplication by the square root of the corresponding eigenvalue, V √D V∗
does not depend on the choice of orthonormal eigenbasis, and it follows that B = √A.
Note: Following the proof, it is easy to see that a positive semidefinite matrix A, that is, a
Hermitian matrix with nonnegative eigenvalues, has a unique positive semidefinite square root.
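The existence construction translates directly into code. The sketch below (an illustrative check, not part of the solution) builds a Hermitian positive definite A from a random complex matrix and forms its square root via the spectral decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = M.conj().T @ M + np.eye(3)  # Hermitian positive definite by construction

# sqrt(A) = U sqrt(D) U* from the spectral decomposition A = U D U*
w, U = np.linalg.eigh(A)
sqrtA = U @ np.diag(np.sqrt(w)) @ U.conj().T

assert np.allclose(sqrtA @ sqrtA, A)          # (sqrt A)^2 = A
assert np.allclose(sqrtA, sqrtA.conj().T)     # sqrt A is Hermitian
assert np.all(np.linalg.eigvalsh(sqrtA) > 0)  # with positive eigenvalues
```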
7. Let A be a square matrix. In any polar decomposition A = QP, show that P = √(A∗A).

Solution. Let A = QP be a polar decomposition of A. Then Q is unitary and P is Hermitian
positive semidefinite, so

    A∗A = (QP)∗QP = P∗Q∗QP = P∗P = P².

From the note in Problem 6, we conclude that P = √(A∗A).
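The identity P = √(A∗A) can be confirmed numerically (an aside, not part of the original solution): the polar factor built from the SVD coincides with the square root of A∗A constructed as in Problem 6. The clip guards against tiny negative eigenvalues from floating point rounding.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))  # a generic real square matrix

# Polar decomposition A = Q P from the SVD
U, s, Vh = np.linalg.svd(A)
Q = U @ Vh
P = Vh.T @ np.diag(s) @ Vh
assert np.allclose(Q @ P, A)

# The unique positive semidefinite square root of A*A, as in Problem 6
w, W = np.linalg.eigh(A.T @ A)
sqrt_AstarA = W @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ W.T

assert np.allclose(P, sqrt_AstarA)  # P = sqrt(A*A)
```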
8. Let A be an invertible complex matrix. Use the singular value decomposition to show we
can write A = P Q where P is Hermitian positive definite and Q is unitary. This is called a
reverse polar decomposition. Show that A is normal if and only if the Hermitian positive definite
matrices from its polar and reverse polar decompositions are the same.

Solution. Write A = U ΣV∗ for a singular value decomposition. Then A = (U ΣU∗)(U V∗) = P Q,
where P = U ΣU∗ is Hermitian positive definite (the singular values are positive since A is
invertible) and Q = U V∗ is unitary; this gives a reverse polar decomposition. Now write A = Q̃P̃
for a polar decomposition. As we saw in Problem 7,

    A∗A = P̃∗Q̃∗Q̃P̃ = P̃∗P̃ = P̃²,

and similarly

    AA∗ = P QQ∗P∗ = P P∗ = P².

If A is normal, then A∗A = AA∗, so P̃² = P², and by the uniqueness in Problem 6, P̃ = P.
Conversely, if P = P̃, then AA∗ = P² = P̃² = A∗A, so A is normal.
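The reverse polar construction from the SVD can be checked numerically as well (an illustrative sketch, not part of the solution):

```python
import numpy as np

rng = np.random.default_rng(2)
# A random complex matrix; generically invertible
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

U, s, Vh = np.linalg.svd(A)
P = U @ np.diag(s) @ U.conj().T  # Hermitian positive definite
Q = U @ Vh                       # unitary

assert np.allclose(P @ Q, A)                   # A = P Q (reverse polar)
assert np.allclose(P, P.conj().T)
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(Q @ Q.conj().T, np.eye(3))
```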