MATH1231 Algebra, 2016
Chapter 8: Eigenvalues and eigenvectors
A/Prof. Daniel Chan
School of Mathematics and Statistics, University of New South Wales
[email protected]

8.1 Introduction

Motivating example: reflections
Example Show that the linear map T_A : R^2 → R^2 associated to A = [0 1; 1 0] is reflection about some line.
Soln To understand the map T, it is useful to change the co-ord axes to y = x and y = −x. Then T is reflection about a co-ord axis.
Q Given A, what are good co-ordinates to understand T_A?
A The theory of eigenvalues and eigenvectors gives an answer.

Definition of Eigenvectors and Eigenvalues
Definition (For Linear Maps) Let V be a vector space over F and T : V → V be linear. If a scalar λ ∈ F and a non-zero vector v ∈ V satisfy T(v) = λv, then λ ∈ F is called an eigenvalue of T and v is called an eigenvector of T for the eigenvalue λ.
Note The domain and codomain are the same vector space.
Definition (For Matrices) Let A be an n × n square matrix. If a scalar λ ∈ F and a non-zero vector x ∈ F^n satisfy Ax = λx, then λ is called an eigenvalue of A and x is called an eigenvector of A for the eigenvalue λ.

Eigenvalues for reflection
Example Some e-vectors & e-values for the reflection matrix A = [0 1; 1 0]:
A(1, 1)^T = (1, 1)^T = 1·(1, 1)^T, ∴ (1, 1)^T is an e-vector with e-value 1.
A(1, −1)^T = (−1, 1)^T = −(1, −1)^T, ∴ (1, −1)^T is an e-vector with e-value −1.
Remark
1 Any non-zero vector on the line of reflection y = x is an e-vector with e-value 1.
2 Sim, any non-zero vector on the orthogonal line y = −x is an e-vector with e-value −1.
3 There are no other e-vectors, so 1, −1 are the only e-values for A.
4 Upshot Eigenvectors give the good co-ord axes!

Eigenspaces
Q How do you find e-vectors & e-values?
Partial A Given λ ∈ F, find e-vectors with e-value λ using:
Theorem-Defn The λ-eigenspace of a square matrix A is the subspace ker(A − λI). The e-vectors of A with e-value λ are precisely the non-zero vectors of ker(A − λI).
Proof. v ≠ 0 is an e-vector with e-value λ ⇐⇒ Av = λv ⇐⇒ (A − λI)v = 0 ⇐⇒ v is a non-zero vector of ker(A − λI).
Corollary λ is an e-value for A iff ker(A − λI) ≠ {0}.

Examples & Remarks on e-vectors
E.g. The 1-eigenspace of the reflection matrix A = [0 1; 1 0] is ker(A − 1I) = ker [−1 1; 1 −1] = span{(1, 1)^T}, the line of reflection y = x.
Rem The 0-e-space is just ker A, so the e-vectors with e-value 0 are the non-zero vectors in ker A.
ker(A − λI) is closed under scalar multn, i.e. v an e-vector with e-value λ =⇒ so is any non-zero scalar multiple of v.

Finding e-values
To find e-values, we use
Theorem λ is an e-value of a square matrix A iff det(A − λI) = 0.
Proof. λ is an e-value for A ⇐⇒ ker(A − λI) ≠ {0} ⇐⇒ solns to (A − λI)x = 0 are not unique ⇐⇒ (A − λI) is not invertible ⇐⇒ det(A − λI) = 0.

2 × 2 Matrices
Example Find the eigenvalues and eigenvectors of the matrix A = [3 −2; −1 2].
Solution The e-values are the solns to the characteristic eqn det(A − λI) = 0:
(3 − λ)(2 − λ) − 2 = 0
λ^2 − 5λ + 4 = 0
(λ − 4)(λ − 1) = 0,
so the e-values are 4 and 1.

Solution (Continued) For λ = 4, solve (A − 4I)v = 0, i.e. [−1 −2; −1 −2]v = 0, which gives v1 + 2v2 = 0; pick e-vector (2, −1)^T. For λ = 1, solve (A − I)v = 0, i.e. [2 −2; −1 1]v = 0, which gives v1 = v2; pick e-vector (1, 1)^T.

Solution (Continued) In this example the e-values are distinct. We have a basis of e-vectors (2, −1)^T, (1, 1)^T, which gives good co-ords to study A.
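These e-values and e-vectors can be confirmed in Maple. This is not part of the original slides; it is a minimal sketch using the same LinearAlgebra package that appears in the 4 × 4 example later on.

> with(LinearAlgebra):
> A := Matrix([[3, -2], [-1, 2]]):
> CharacteristicPolynomial(A, lambda);  # lambda^2 - 5*lambda + 4, so the e-values are 4 and 1
> Eigenvectors(A);  # a Vector of the e-values and a Matrix whose columns are corresponding e-vectors, scalar multiples of (2, -1)^T and (1, 1)^T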
Characteristic polynomial
Theorem Let A be an n × n matrix over C. Then det(A − λI) is a polynomial of degree n in λ. The polynomial det(A − λI) is called the characteristic polynomial for A.
An n × n matrix always has n e-values in C (not necessarily distinct). If all n e-values are distinct, and B = {v1, ..., vn} are e-vectors with distinct e-values, then B is a basis for C^n (and for R^n if the vectors are real).
Warning If the e-values are not distinct then there may not be a basis of e-vectors. This happens for A = [1 1; 0 1].

Example Find the eigenvalues and eigenvectors of the matrix [2 0; 0 2].
Solution Let A = [2 0; 0 2]. Then det(A − λI) = (2 − λ)^2 = 0. Hence, the eigenvalue is 2.

Solution (Continued) Compute the 2-e-space ker(A − 2I). Solve
(A − 2I)v = 0 ⇐⇒ [0 0; 0 0] v = (0, 0)^T.
v can be anything, i.e. the 2-e-space is R^2. Hence any non-zero vector in R^2 is an e-vector. The e-vectors are s(1, 0)^T + t(0, 1)^T, with s and t not both 0.
Here the e-values are equal. We still have a basis of e-vectors (1, 0)^T, (0, 1)^T.

Example Find the eigenvalues and eigenvectors of the matrix [1 1; −1 1].
Solution Let A = [1 1; −1 1]. The e-values of A are the solns to the characteristic eqn det(A − λI) = 0:
det [1−λ 1; −1 1−λ] = 0
(1 − λ)(1 − λ) − (−1) = 0
(λ − 1)^2 = −1
λ = 1 ± i.
The e-values are 1 + i and 1 − i.

Solution (Continued) The e-vectors v = (v1, v2)^T of A for λ = 1 + i are the non-zero solns to (A − (1 + i)I)v = 0, which is
[1−(1+i) 1; −1 1−(1+i)] v = [−i 1; −1 −i] v = (0, 0)^T.
Reduce the augmented matrix with the right hand zero column omitted:
[−i 1; −1 −i] → (R2 = R2 + iR1) → [−i 1; 0 0].
Hence −iv1 + v2 = 0. Put v2 = t so v1 = −it. The e-vectors are t(−i, 1)^T, t ≠ 0.

Solution (Continued) The e-vectors v = (v1, v2)^T of A for λ = 1 − i are the non-zero solns to (A − (1 − i)I)v = 0, which is
[1−(1−i) 1; −1 1−(1−i)] v = [i 1; −1 i] v = (0, 0)^T.
Reduce the augmented matrix with the right hand zero column omitted:
[i 1; −1 i] → (R2 = R2 − iR1) → [i 1; 0 0].
Hence iv1 + v2 = 0. The e-vectors are t(i, 1)^T, t ≠ 0.
Here, the e-values are distinct & not real. Again we have a basis of e-vectors (−i, 1)^T, (i, 1)^T.

Higher Order Square Matrices
Example Find the eigenvalues and eigenvectors of the matrix
A = [−3 4 2 −3; 2 12 −4 2; −5 12 2 −1; −15 4 2 9].
Solution (Use Maple)
> with(LinearAlgebra):
> A := <<-3, 2, -5, -15>|<4, 12, 12, 4>|<2, -4, 2, 2>|<-3, 2, -1, 9>>;
    [ -3   4   2  -3 ]
    [  2  12  -4   2 ]
    [ -5  12   2  -1 ]
    [-15   4   2   9 ]

Solution (Continued)
> Eigenvectors(A);
    [  4 ]   [ 1   1    0    1 ]
    [ -4 ]   [ 1   0   1/2   2 ]
    [ 12 ] , [ 3   1   1/2   3 ]
    [  8 ]   [ 1   1    1    1 ]
The e-values are 4, −4, 12, 8, with corresponding e-vectors the columns of the second matrix. Check these directly!

8.2 Diagonalisation

Diagonal matrices are easy to multiply
Multiplication of diagonal matrices is easy. Let
C = diag(c1, c2, ..., cn),  D = diag(d1, d2, ..., dn).
Proposition
i) CD = diag(c1 d1, c2 d2, ..., cn dn),  ii) D^k = diag(d1^k, d2^k, ..., dn^k).
Note that i) =⇒ ii) by induction.

Examples of multiplying diagonal matrices
The Propn is clear from any e.g.
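For instance, a quick Maple session (not in the original slides; a minimal sketch, with the names C1 and D1 chosen because D is a protected name in Maple) illustrates i) and ii):

> with(LinearAlgebra):
> C1 := DiagonalMatrix([1, 4, -1]):
> D1 := DiagonalMatrix([2, 3, 5]):
> C1 . D1;  # diagonal with entries 2, 12, -5: the entrywise products c_i*d_i
> D1^3;     # diagonal with entries 8, 27, 125: the entrywise cubes d_i^3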
Diagonalisation: motivation
Key Fact A square matrix A can be recovered from a basis of e-vectors & corresponding e-values.
Example Let A be a 3 × 3 matrix with e-values 1, 3, −2 & corresponding basis of e-vectors v1, v2, v3. Find A.
Soln Let M = (v1|v2|v3) and D = [1 0 0; 0 3 0; 0 0 −2]. Then
AM = A(v1|v2|v3) = (Av1|Av2|Av3) = (v1|3v2|−2v3) = (v1|v2|v3) [1 0 0; 0 3 0; 0 0 −2] = MD.
Columns of M are lin indep =⇒ M is invertible. ∴ M^{-1}AM = D or A = MDM^{-1}.

Diagonalisation
Theorem (Diagonalisation) Suppose the n × n matrix A has a basis {v1, ..., vn} of e-vectors with corresponding e-values λ1, ..., λn. Let M = (v1|v2|...|vn) & let D = (λ1 e1|...|λn en) be the diagonal matrix with i-th diagonal entry λi. Then M^{-1}AM = D.
Conversely, if M^{-1}AM = D with D diagonal, then the columns of M are a basis of e-vectors of A & the diagonal entries of D give the corresponding e-values.
Definition A square matrix A is diagonalisable if there exists an invertible matrix M and a diagonal matrix D such that M^{-1}AM = D.

Diagonalisation: example
Example Diagonalise A = [3 −2; −1 2], i.e. find an invertible matrix M & diagonal matrix D so that M^{-1}AM = D.
Soln We've already seen (2, −1)^T, (1, 1)^T are e-vectors with e-values 4 and 1, so we may take M = [2 1; −1 1] and D = [4 0; 0 1].

Diagonalisation: another example
Example Show A = [1 1; 0 1] is not diagonalisable by showing it does not have a basis of e-vectors.
Solution det(A − λI) = (1 − λ)^2 = 0, so the only e-value is 1.

Example cont'd The 1-eigenspace is ker(A − I) = ker [0 1; 0 0] = span{(1, 0)^T}, so every e-vector of A is a non-zero multiple of (1, 0)^T. Hence there is no basis of R^2 consisting of e-vectors of A, and A is not diagonalisable.

8.3 Applications

Powers of a Matrix
Suppose M^{-1}AM = D or A = MDM^{-1}. Then
A^n = (MDM^{-1})^n = MDM^{-1} MDM^{-1} ··· MDM^{-1} = MD^n M^{-1}.
Easy to compute if D is diagonal!
Example Let A = [3 −2; −1 2]. Find A^100 and A^n.
Solution We use the diagonalisation M^{-1}AM = D where M = [2 1; −1 1], D = [4 0; 0 1].

Solution (Continued) M^{-1} = (1/3)[1 −1; 1 2], so
A^n = MD^n M^{-1} = [2 1; −1 1] [4^n 0; 0 1] (1/3)[1 −1; 1 2] = (1/3)[2·4^n + 1  −2·4^n + 2; −4^n + 1  4^n + 2].

Solution (Continued) In particular,
A^100 = (1/3)[2·4^100 + 1  −2·4^100 + 2; −4^100 + 1  4^100 + 2].

Why diagonal matrices are good: decoupled equations
Consider diagonal D = [d1 0; 0 d2]. The eqn Dx = b is really easy to solve ∵ it's
d1 x1 = b1
d2 x2 = b2.
The 1st eqn only involves x1 whilst the 2nd only involves x2. ∴ we can solve the eqns separately & we call these eqns decoupled.
Upshot row-echelon form =⇒ easy to solve, but diagonal form =⇒ REALLY, REALLY easy to solve.
Remark The same is true of differential equations, as we will see on the next slide.

Decoupled ODEs
Suppose that the population of hobbits at time t is x(t) & the population of orcs is y(t). If they live separately, say in the Shire & in Mordor, the populations grow independently according to a DE like
Example
dx/dt = 3x
dy/dt = 2y.
Soln These are decoupled & we can solve for x, y separately:
x(t) = αe^{3t},  y(t) = βe^{2t}.

Matrix form for (de)coupled ODEs
Let's now put the hobbits and orcs together in New Zealand. Then they evolve according to a coupled ODE like
Example
dx/dt = 3x − 2y
dy/dt = −x + 2y.
We rewrite this in matrix form. Let y = (x, y)^T and A = [3 −2; −1 2], so
dy/dt = (dx/dt, dy/dt)^T = (3x − 2y, −x + 2y)^T = [3 −2; −1 2] y  =⇒  y′ = Ay.
The fact that the ODE is coupled corresponds to the fact that A is not diagonal.
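As an aside (not in the original slides), Maple's dsolve can solve this coupled hobbit/orc system directly; the name sys below is just a label for the system. The shape of the answer anticipates the decoupling argument that follows.

> sys := {diff(x(t), t) = 3*x(t) - 2*y(t), diff(y(t), t) = -x(t) + 2*y(t)}:
> dsolve(sys, {x(t), y(t)});  # general soln built from exp(4*t) and exp(t) terms with two arbitrary constants, matching the e-values 4 and 1 of A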
Solving ODEs by decoupling
Consider a diagonalised matrix A = MDM^{-1} & the ODE dy/dt = Ay. Change variables to z = M^{-1}y & use
Proposition Given a constant matrix C we have d/dt (Cy) = C dy/dt.
Since M^{-1}A = DM^{-1} we have
dz/dt = d/dt (M^{-1}y) = M^{-1} dy/dt = M^{-1}Ay = DM^{-1}y = Dz.
If λ1, ..., λn are the diagonal entries of D, then this decoupled ODE in z can be solved as in the hobbits in the Shire / orcs in Mordor example:
z = (α1 e^{λ1 t}, ..., αn e^{λn t})^T  =⇒  y = M (α1 e^{λ1 t}, ..., αn e^{λn t})^T.

General solution to dy/dt = Ay: explicit formula
If M = (v1|...|vn), we can multiply matrices to get
Theorem Let A be an n × n matrix with a basis of e-vectors v1, ..., vn & corresponding e-values λ1, ..., λn. Then the general solution to y′ = Ay is
y(t) = α1 e^{λ1 t} v1 + ··· + αn e^{λn t} vn
for arbitrary constants α1, ..., αn ∈ R.
Note It's easy to check the expression above does give a solution.

Back to hobbits and orcs
Example Recall the hobbit/orc population in NZ is governed by dy/dt = Ay where A = [3 −2; −1 2].
1 Find the general soln to the ODE.
2 Find the population as a function of time if the initial population consisted of 4000 hobbits and 1000 orcs.
Solution We found e-vectors for A: (2, −1)^T, (1, 1)^T with corresponding e-values 4, 1.

Hobbit/orc example cont'd By the Theorem, the general soln is
y(t) = α1 e^{4t} (2, −1)^T + α2 e^{t} (1, 1)^T,  α1, α2 ∈ R.
For part 2, y(0) = (4000, 1000)^T gives 2α1 + α2 = 4000 and −α1 + α2 = 1000, so α1 = 1000, α2 = 2000 and
y(t) = (2000e^{4t} + 2000e^{t}, −1000e^{4t} + 2000e^{t})^T.

Alternate method via change of variable & decoupling
Recall D = M^{-1}AM where M = [2 1; −1 1], D = [4 0; 0 1]. Change variables to z = M^{-1}y. Decoupling as above gives the decoupled ODE
dz/dt = [4 0; 0 1] z,  i.e.  dz1/dt = 4z1,  dz2/dt = z2.
General soln: z = (z1(t), z2(t))^T = (α1 e^{4t}, α2 e^{t})^T, α1, α2 ∈ R.
To find the integration constants α1, α2, we use the initial value for z, i.e. z(0) = M^{-1}y(0), which can be found by solving Mz(0) = y(0) = (4000, 1000)^T.

Alternate method cont'd
[2 1 | 4000; −1 1 | 1000] → (R2 = R2 + (1/2)R1) → [2 1 | 4000; 0 3/2 | 3000],
so z2(0) = (2/3) × 3000 = 2000 & 2z1(0) + 1 × 2000 = 4000 =⇒ z1(0) = 1000. Thus
(α1, α2)^T = z(0) = (z1(0), z2(0))^T = (1000, 2000)^T  =⇒  z = (1000e^{4t}, 2000e^{t})^T.
We go back to the original variables:
y = Mz = [2 1; −1 1] (1000e^{4t}, 2000e^{t})^T = (2000e^{4t} + 2000e^{t}, −1000e^{4t} + 2000e^{t})^T.

Solving 2nd order ODEs using systems of 1st order ODEs
Example Solve y″ + 4y′ − 5y = 0 by converting this ODE into a system of first order differential equations.
Solution The trick is to let y1 = y and y2 = y1′ = y′, and y = (y1, y2)^T. We have y″ = 5y − 4y′ = 5y1 − 4y2, so
y′ = (y1′, y2′)^T = (y2, 5y1 − 4y2)^T = [0 1; 5 −4] y.

Solution (Continued) The e-values of A = [0 1; 5 −4] are the solns to the characteristic eqn det(A − λI) = 0:
det [−λ 1; 5 −4−λ] = 0
−λ(−4 − λ) − 5 = 0
λ^2 + 4λ − 5 = 0
(λ − 1)(λ + 5) = 0
λ = 1, −5.

Solution (Continued) The e-vectors v = (v1, v2)^T of A for e-value 1 are the non-zero solns to (A − I)v = 0, i.e.
[−1 1; 5 −5] (v1, v2)^T = (0, 0)^T.
Hence −v1 + v2 = 0. Pick e-vector v = (1, 1)^T.
Sim, solve (A + 5I)v = 0 to get an e-vector for λ = −5:
[5 1; 5 1] (v1, v2)^T = (0, 0)^T, i.e. 5v1 + v2 = 0.
Pick e-vector v = (1, −5)^T for λ = −5. The general soln is
y = α (1, 1)^T e^{t} + β (1, −5)^T e^{-5t}.
Hence y = y1 = αe^{t} + βe^{-5t} for constants α, β.

Initial Value Problem
Example Solve the IVP y″ + 4y′ − 5y = 0, y(0) = 1, y′(0) = −5.
Solution Use the gen soln y = α (1, 1)^T e^{t} + β (1, −5)^T e^{-5t}. To find α, β, note
α (1, 1)^T + β (1, −5)^T = y(0) = (y(0), y′(0))^T = (1, −5)^T.
You can now solve by Gaussian elimination, or better still note α = 0, β = 1 must be the soln! Hence
y = (1, −5)^T e^{-5t}  =⇒  y = y1 = e^{-5t}.
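As a final check (not in the original slides; a sketch assuming Maple's standard dsolve syntax, with the initial condition on y′ written via the D operator and ode used as a label), the IVP can be solved directly:

> ode := diff(y(t), t, t) + 4*diff(y(t), t) - 5*y(t) = 0:
> dsolve({ode, y(0) = 1, D(y)(0) = -5});  # returns y(t) = exp(-5*t)

This agrees with the soln y = e^{-5t} found above.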