Eigenvalues and eigenvectors
mat-lin2

1/13
Eigenvalues and eigenvectors

An eigenvector vλ and eigenvalue λ of a matrix A are defined by

    A · vλ = λ vλ

or

    (A − λδ) · vλ = 0

The second equation can hold (for a nonzero vector vλ) only if the matrix A − λδ is singular, i.e.:

    det(A − λδ) = 0

This is an algebraic equation of the n-th degree, with n roots (incl. multiplicity).

Examples: weighted matrix of 2nd derivatives of a potential in a calculation of fundamental frequencies, heat equation, Schrödinger equation, stochastic matrix, system of linear differential equations, . . .

5/13
Diagonalization of a quadratic form

Let A be a symmetric matrix and v(j) its eigenvectors, |v(j)| = 1. Let the matrix U be composed of the column vectors v(j), i.e., Uij = v(j)i. Then v(j) · v(k) = δjk ⇒ UT · U = δ (U is orthogonal). The eigenvector condition becomes:

    v(i)T · A · v(j) = v(i)T · λj v(j) = λj δij   ⇒   UT · A · U = Λ

where Λ = diag(λ1, λ2, . . .), Λij = λi δij, is a diagonal matrix with the eigenvalues on the diagonal. The transformed quadratic form is

    xT · A · x = uT · UT · A · U · u = uT · Λ · u = Σi λi ui²

where

    x = U · u   or   u = U−1 · x

Similarly in C for self-adjoint matrices (T replaced by †).

The unitary transformation U (e.g., a rotation in Rn) transforms a symmetric (in R) or self-adjoint (in C) matrix to a diagonal one. Thus “diagonalization = calculating eigenvectors and eigenvalues”.
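The identity UT · A · U = Λ is easy to check numerically; a minimal NumPy sketch (the symmetric matrix A below is an arbitrary example, not one from the slides):

```python
# Check U^T . A . U = Lambda for a symmetric matrix.
# The matrix A is an arbitrary illustrative example.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # symmetric
lam, U = np.linalg.eigh(A)       # eigenvalues (ascending) and orthonormal eigenvectors
assert np.allclose(U.T @ U, np.eye(2))          # U is orthogonal
assert np.allclose(U.T @ A @ U, np.diag(lam))   # diagonalization
```

`numpy.linalg.eigh` is the appropriate routine here because it exploits symmetry and returns real eigenvalues with orthonormal eigenvectors.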
2/13
Eigenvalues

A self-adjoint matrix (in C): A = A†, A† ≡ (A*)T
A symmetric matrix (in R): A = AT

Eigenvalues of a self-adjoint (symmetric in R) matrix are real.

Proof: Left-multiply A · v = λv by v†:

    v† · A · v = Σij v*i Aij vj = Σi v*i λ vi = λ|v|²

On the other hand, taking the complex conjugate and using A*ij = Aji:

    (v† · A · v)* = Σij vi A*ij v*j = Σij v*j Aji vi = v† · A · v

so λ*|v|² = λ|v|²  ⇒  λ = λ*  ⇒  λ ∈ R.

6/13
Example – quadratic form

    x² − 4xy + y²

Matrix:

    A = (  1  −2 )
        ( −2   1 )

Characteristic equation:

    det ( 1−λ   −2  ) = λ² − 2λ − 3 = 0
        ( −2   1−λ )

roots: λ1 = −1, λ2 = 3. Equations for the eigenvectors:

    A · v1 = −v1  ⇒  v1 = (1, 1)T
    A · v2 = 3v2  ⇒  v2 = (−1, 1)T

And the normalized eigenvectors = a basis:

    U = ( 1/√2  −1/√2 )
        ( 1/√2   1/√2 )

This is the rotation by 45°.

[pic/qform.sh]
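A quick NumPy check of this example (a sketch, not part of the original slides):

```python
# Verify the eigenvalues and normalized eigenvectors of A = [[1, -2], [-2, 1]].
import numpy as np

A = np.array([[ 1.0, -2.0],
              [-2.0,  1.0]])
lam, U = np.linalg.eigh(A)                    # ascending order: -1, 3
assert np.allclose(lam, [-1.0, 3.0])
# every entry of the eigenvector matrix has magnitude 1/sqrt(2),
# as expected for a 45-degree rotation (columns defined up to sign)
assert np.allclose(np.abs(U), 1 / np.sqrt(2))
```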
3/13
Bra-ket notation

Vector = “ket” = |v⟩, |v⟩i = vi
Co-vector = “bra” = |v⟩† = ⟨v|, ⟨v|i = v*i
Scalar product: ⟨u|v⟩ = Σi u*i vi = Σi ⟨u|i |v⟩i
Operator: A or Â: A|v⟩ or |Av⟩; (A|v⟩)i = Σj Aij vj
Matrix element: ⟨u|A|v⟩ = ⟨u|(A|v⟩) = Σij u*i Aij vj
For a Hermitean matrix: ⟨u|A|v⟩ = ⟨v|A|u⟩*
Operator acting on a bra: ⟨u|A = the bra such that (⟨u|A)|v⟩ = ⟨u|A|v⟩ ∀v; in coordinates: (⟨u|A)j = Σi u*i Aij
Proving λ ∈ R again: ⟨v|A|v⟩ = ⟨v|λv⟩ = λ⟨v|v⟩ and ⟨v|A|v⟩ = ⟨v|A|v⟩* = λ*⟨v|v⟩, hence λ = λ*.

7/13
Signature of a quadratic form

The signature = the numbers of (positive, negative, zero) eigenvalues.

Example: the signature of x² − 4xy + y² is (+, −), or (n+, n−, n0) = (1, 1, 0).

For f(xi) “continuous enough”, the condition for an extreme is:

    ∂f/∂xi = 0,  i = 1, . . . , n

If this holds true for x0, the Taylor expansion at x0 is:

    f(x) = f(x0) + (1/2) Σij (xi − x0i) Aij (xj − x0j),

where the Hessian matrix is

    Aij = ∂²f/∂xi∂xj |x=x0

If the signature of A is (n, 0, 0) = (+ + + + . . .), the form is positive definite and f has a local minimum at x0.
If the signature of A is (0, n, 0) = (− − − − . . .), the form is negative definite and f has a local maximum at x0.
If the signature contains both pluses and minuses, it is indefinite and f has a saddle point at x0.
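The classification can be sketched in NumPy (illustrative; the Hessian below is that of f = x² − 4xy + y², whose stationary point at the origin is a saddle):

```python
# Classify a stationary point by the signature of its Hessian.
import numpy as np

H = np.array([[ 2.0, -4.0],
              [-4.0,  2.0]])     # Hessian of f = x^2 - 4xy + y^2
lam = np.linalg.eigvalsh(H)      # eigenvalues: -2 and 6
n_pos = int(np.sum(lam > 0))
n_neg = int(np.sum(lam < 0))
if n_pos == len(lam):
    kind = "minimum"
elif n_neg == len(lam):
    kind = "maximum"
else:
    kind = "saddle point"
assert kind == "saddle point"
```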
4/13
Orthogonality of eigenvectors

Eigenvectors (of different eigenvalues) of a self-adjoint matrix are perpendicular. Proof – from the right:

    ⟨v(2)|A|v(1)⟩ = ⟨v(2)|λ1 v(1)⟩ = λ1 ⟨v(2)|v(1)⟩

– and from the left:

    ⟨v(2)|A|v(1)⟩ = ⟨λ2 v(2)|v(1)⟩ = λ2* ⟨v(2)|v(1)⟩ = λ2 ⟨v(2)|v(1)⟩

which can both hold (for λ1 ≠ λ2) only if ⟨v(2)|v(1)⟩ = 0. We can always orthonormalize a subspace of degenerate eigenvalues, hence a self-adjoint matrix generates an orthonormal basis.

A similar statement (called the spectral theorem) holds for compact self-adjoint operators in ∞-dimensional Hilbert spaces. An operator is compact if it is bounded and maps a bounded set to a set whose closure is compact (closure = set + boundary); the image of a sequence in a unit ball contains a Cauchy subsequence (which converges); loosely, the image of an R-ball does not extend to infinity. Loosely, “infinities arising from the operator are manageable”.

Examples: diag{1, 1/4, 1/9, . . .} is compact self-adjoint; p̂x = −iℏ ∂/∂x is self-adjoint but not compact. The identity in an ∞-dimensional space is not compact.

Examples: see mat-lin4.mw

8/13
Sylvester criterion

We calculate the leading subdeterminants:

    det |Aij| i,j=1
    det |Aij| i,j=1..2
    det |Aij| i,j=1..3
    . . .

All are positive at point x0: minimum.
Alternating signs at point x0 (−, +, −, . . .): maximum.

The proof uses the spectral theorem and the Cholesky decomposition of a Hermitean matrix, A = L† · L, where L is a triangular matrix.
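The criterion translates directly into code; a sketch (the 3×3 matrix is an arbitrary positive-definite example):

```python
# Sylvester criterion: all leading principal minors positive <=> positive definite.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
minors = [np.linalg.det(A[:k, :k]) for k in range(1, len(A) + 1)]
positive_definite = all(m > 0 for m in minors)
assert positive_definite
assert np.all(np.linalg.eigvalsh(A) > 0)   # agrees with the eigenvalue test
```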
9/13
Fundamental vibrations

Let the PES be Upot(τ), τ = {r1, . . . , rN}, with a (local) minimum at τmin; the deviation from the minimum: ∆τ = τ − τmin, ∆ri = ri − ri,min.

Taylor expansion to the 2nd order (the 1st-order term = 0 at the minimum):

    Upot(τ) = Upot(τmin) + Σi ∂Upot/∂ri (τmin) · ∆ri + (1/2) Σij ∆ri · ∂²Upot/∂ri∂rj (τmin) · ∆rj

Newton equations of motion:

    mi ∆r̈i ≡ mi ∂²∆ri/∂t² = fi = −Σj Aij ∆rj

where the so-called Hessian matrix is

    Aij = ∂²Upot/∂ri∂rj (τmin)

In the matrix form (vector = 3N numbers, matrix = 3N × 3N):

    M · ∆τ̈ = −A · ∆τ,  where M = diag(m1, m1, m1, . . . , mN, mN, mN)

[tchem/showvib.sh]

10/13
Fundamental vibrations

We are looking for a transformation (basis) in the form

    ∆τ = M−1/2 · U · u

where U is orthogonal. By inserting:

    M · M−1/2 · U · ü = −A · M−1/2 · U · u

Left-multiplied by U−1 · M−1/2:

    ü = −Λ · u,  Λ = U−1 · M−1/2 · A · M−1/2 · U

There exists an orthogonal matrix U so that Λ = U−1 · M−1/2 · A · M−1/2 · U is diagonal; in other words, we diagonalize the symmetric matrix A′:

    A′ = M−1/2 · A · M−1/2

The Newton equations then separate into 3N independent harmonic oscillators:

    üα = −Λαα uα,  α = 1, . . . , 3N

The frequencies are (6 of them are zero for general molecules, 5 for linear molecules):

    να = √Λαα / (2π)
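The recipe above in NumPy (a sketch; the example system — two unequal masses joined by one spring of force constant K in one dimension — is my assumption, not a slide example):

```python
# Normal modes: diagonalize A' = M^(-1/2) . A . M^(-1/2);
# frequencies are nu_a = sqrt(Lambda_aa) / (2 pi).
import numpy as np

K, m1, m2 = 1.0, 1.0, 2.0
A = K * np.array([[ 1.0, -1.0],
                  [-1.0,  1.0]])           # Hessian of (K/2)(x - y)^2
Minv12 = np.diag([m1**-0.5, m2**-0.5])     # M^(-1/2)
Aprime = Minv12 @ A @ Minv12
lam, U = np.linalg.eigh(Aprime)            # ascending: 0 (translation), K/mu
mu = m1 * m2 / (m1 + m2)                   # reduced mass
assert np.allclose(lam, [0.0, K / mu])
freqs = np.sqrt(np.clip(lam, 0.0, None)) / (2 * np.pi)
```

The `clip` guards against tiny negative eigenvalues from round-off before the square root.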
11/13
Fundamental vibrations – diatomic molecule

Two atoms connected by a spring:

    Upot = (K/2)(x − y)²   ⇒   A′ = (  K/m  −K/m )   ⇒   Λ = diag(2K/m, 0)
                                    ( −K/m   K/m )

The frequencies are

    ν1 = √(2K/m) / (2π)  (sym. stretch),  ν2 = 0  (translation)

Unnormalized eigenvectors:

    ψ1 = (1, −1)T  →←,  ψ2 = (1, 1)T  →→
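A numerical check of the diatomic example (sketch; K and m set to 1 for illustration):

```python
# Lambda = diag(2K/m, 0) for the equal-mass diatomic.
import numpy as np

K, m = 1.0, 1.0
Aprime = (K / m) * np.array([[ 1.0, -1.0],
                             [-1.0,  1.0]])
lam, U = np.linalg.eigh(Aprime)        # ascending: [0, 2K/m]
assert np.allclose(lam, [0.0, 2 * K / m])
nu1 = np.sqrt(lam[1]) / (2 * np.pi)    # symmetric stretch; nu2 = 0 is the translation
```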
12/13
Homogeneous linear differential eqs. of the 1st order

The system of homogeneous linear differential equations of the 1st order:

    ẋ1 = A11 x1 + A12 x2 + · · · + A1n xn
    . . .
    ẋn = An1 x1 + An2 x2 + · · · + Ann xn

or, in matrix form,

    ẋ = A · x

One of n linearly independent solutions:

    x = eλt v   ⇒   A · v = λv

For real A, the λ’s are real or come in complex-conjugate pairs.

General solution if all λ’s are different:

    x = Σλ Cλ eλt vλ

where the Cλ’s are determined from the initial conditions. If there are multiple eigenvalues (roots of the characteristic equation), the solutions contain eλt, t eλt, etc.

The set is always equivalent to one homogeneous linear differential equation of the n-th order.

13/13
Example

    ẋ = y,  ẏ = −x

Equivalent differential equation of the 2nd order:

    ẍ = −x  (harmonic oscillator)

    A = (  0  1 )
        ( −1  0 )

Eigenvalues and eigenvectors:

    λ = ±i,  vi = (1, i)T,  v−i = (i, 1)T

General solution:

    (x, y)T = Ci vi eit + C−i v−i e−it

i.e.,

    x = Ci eit + i C−i e−it
    y = i Ci eit + C−i e−it

With initial conditions x(0) = 1, y(0) = 0: Ci = 1/2, C−i = −i/2, so that

    x = cos(t),  y = −sin(t)
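The recipe x = Σλ Cλ eλt vλ as a NumPy sketch, applied to A = [[0, 1], [−1, 0]] with x(0) = (1, 0) (the harmonic-oscillator system ẋ = y, ẏ = −x):

```python
# Solve x' = A.x via eigendecomposition: x(t) = sum_l C_l e^(l t) v_l.
import numpy as np

A = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])
lam, V = np.linalg.eig(A)              # complex eigenvalues +i, -i
x0 = np.array([1.0, 0.0])
C = np.linalg.solve(V, x0)             # expansion coefficients from x(0)
t = 0.7                                # an arbitrary time
x = (V * np.exp(lam * t)) @ C          # columns of V scaled by e^(lam t)
assert np.allclose(x.real, [np.cos(t), -np.sin(t)])
assert np.allclose(x.imag, 0.0)        # imaginary parts cancel for real A, real x0
```

The complex-conjugate eigenpair combines into the real solution x = cos t, y = −sin t, as derived above.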