Lecture Notes for Differential Equations (022B)

Rafael Granero Belinchón, [email protected], Department of Mathematics, University of California, Davis, Fall Quarter 2014

User's guide

These are the lecture notes of my 22B course on Differential Equations. These notes are based on the book Elementary Differential Equations and Boundary Value Problems by W.E. Boyce and R.C. DiPrima. The goal of these notes is to help the students understand the material covered during the classes and to serve as a supplement to the textbook. These notes are not written to replace a careful reading of the previously mentioned book. Consequently, it is highly recommended to study the book as well. This course should be taken seriously, in the sense that the average student must study several hours per week. In particular, during the lectures and in these notes there are a number of exercises and examples that every student should try.

There are two types of text in these notes other than the standard one. There are several reminders from other courses. I call them "memento" (not only from the English word 'memento', but also from the Latin sentence 'memento studere'). In these notes I also provide advanced material called "one step forward". Finally, let me emphasize that it is important to become familiar with all the material in these lecture notes as a first step towards the proficiency in the subject required for the exams and real-life situations.

Contents

1 Introduction                                            1
  1.1 Malthus's Law                                       5
  1.2 More on the classification of the DE                5

2 First order ODE                                         9
  2.1 Integrating factors                                 9
  2.2 Separable equations                                12
  2.3 Existence, uniqueness and modeling                 16
      2.3.1 Modeling                                     17
      2.3.2 Existence and uniqueness                     19
  2.4 Equations from population dynamics                 21
      2.4.1 Malthus Law                                  23
      2.4.2 Logistic growth                              24
      2.4.3 Threshold                                    27
      2.4.4 The Lotka and Volterra predator-prey model   28
  2.5 Forward Euler method                               30
  2.6 Existence Theorem revisited                        33
  2.7 Discrete models                                    35

3 Second order ODE                                       41
  3.1 The Wronskian                                      43
  3.2 Complex roots                                      47
  3.3 Repeated roots                                     48
  3.4 Examples                                           49
  3.5 Nonhomogeneous equations                           51
  3.6 Resonance                                          56
  3.7 The Laplace transform                              57

4 Systems of ODE                                         61
  4.1 Review of matrices                                 61
  4.2 System of ODEs                                     66
  4.3 The matrix exponential                             73

List of Figures
2.1  Dr. Zombie by Jorge David (Wikipedia)                                  19
2.2  SIR Model                                                              22
2.3  a) Cable about the muskrats to the New York Times b) A muskrat.        23
2.4  Solutions corresponding to different initial data, P0                  26
2.5  Evolution of the shark captures (percentage)                           28
2.6  Fishes vs. Sharks                                                      29
2.7  Exact solution (red) and approximate solution (blue) corresponding to
     the same initial data, P0
2.8  a) k = 2, y0 = 0.5, b) k = 2, y0 = 0.45, c) k = 2, y0 = 0.55           38
2.9  a) k = 3, y0 = 0.5, b) k = 3, y0 = 0.45, c) k = 3, y0 = 0.55           38
2.10 a) k = 3.7, y0 = 0.5, b) k = 3.7, y0 = 0.45, c) k = 3.7, y0 = 0.55     39
2.11 k from 0 to 4, y0 = 0.3.                                               39

Chapter 1

Introduction

Let's start with the basic definition:

Definition 1.1. An ordinary differential equation (ODE) is an expression of the (general) form

    d^n P(t)/dt^n = F(P(t), P'(t), P''(t), ..., d^{n-1} P(t)/dt^{n-1}, t).    (1.1)

n is the order of the differential equation and the function F is called the rate function. When P(t) = P⃗(t) ∈ R^d and F(P(t), P'(t), ..., t) = F⃗(P(t), P'(t), ..., t) ∈ R^d, it is called a system of differential equations. If F is a linear function in the variables P(t), P'(t), ..., d^{n-1} P(t)/dt^{n-1}, the ODE is called linear. Otherwise it is called nonlinear (see Example 1.1 below).

In other words, a differential equation is an equation where the unknown is a function, P(t). This equation establishes a relationship between the unknown P(t) and its derivatives.
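Definition 1.1 can be made concrete with a short numerical check. The following sketch is our own illustration (it is not part of the notes): it tests whether a candidate function P satisfies a first order ODE P'(t) = F(P(t), t) by comparing a central-difference approximation of P' with the rate function F. The name `is_solution`, the sample points and the tolerances are our choices.

```python
# Numerical check of Definition 1.1 for a first-order ODE P'(t) = F(P(t), t).
# Illustrative sketch only; function names and tolerances are ours.

def is_solution(P, F, ts, tol=1e-4, h=1e-6):
    """Check P'(t) ≈ F(P(t), t) at the sample points ts via central differences."""
    return all(abs((P(t + h) - P(t - h)) / (2 * h) - F(P(t), t)) < tol for t in ts)

ts = [0.0, 0.5, 1.0, 2.0]
# P(t) = t^2/2 solves P'(t) = t (here the rate function is F(P, t) = t):
print(is_solution(lambda t: t * t / 2, lambda P, t: t, ts))  # True
# P(t) = t^3 does not solve it:
print(is_solution(lambda t: t ** 3, lambda P, t: t, ts))     # False
```

The same residual idea (plug the candidate into the equation and see whether the equation is satisfied) is exactly the notion of solution discussed later in Section 1.2.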
The word ordinary is due to the fact that the unknown function depends on a single variable, P = P(t). In general, one may think of the independent variable t as denoting time.

[*** Why should we study them? Differential equations are a very important (and widespread) tool to understand (and make predictions about) several phenomena in real life:

• Physics (fluid dynamics, weather prediction)

• Chemistry (chemical reactions)

• Biology (population dynamics) ***]

Example 1.1 (Exercises 1-3, Section 1.3). Determine the order of the ODE. Also state whether they are linear or nonlinear:

Solution: 1) order 2, linear. 2) order 2, nonlinear. 3) order 4, linear.

Example 1.2. Solve the differential equation P'(t) = t.

Solution: We know that we have to compute the antiderivative of the function F(P(t), t) = t. In this case,

    P(t) = t^2/2 + C.

Notice that this antiderivative is not unique (it depends on the value of the constant C). Consequently, the solution

    P(t) = t^2/2 + C

is not unique either. [*** Check that P(t) is a solution to the differential equation. ***]

Memento 1.1. Recall that the Fundamental Theorem of Calculus says that, given f(x), its antiderivative is unique up to an additive constant. In other words, if F(x) verifies F'(x) = f(x), then G(x) = F(x) + c also verifies G'(x) = f(x).

To guarantee that a differential equation has only one solution, it is common to attach an initial condition of the form P(0) = P0 ∈ R.

Example 1.3. Solve the differential equation P'(t) = t with initial condition P(0) = 1.

Solution: We know from the previous example that the general solution is

    P(t) = t^2/2 + C,

where C is any constant. Now we impose P(0) = 1 and we get

    P(0) = 0 + C = 1 ⇒ C = 1.

Consequently, the unique solution to the previous initial value problem is

    P(t) = t^2/2 + 1.

Example 1.4. Write a mathematical model of a falling object near sea level without considering friction.
Solution: We measure the mass in kg, time in seconds and the height in meters. Let's write M for the mass of the object. Then the unknown is the height at time t. We denote this unknown function by h(t). Notice that the book considers the velocity positive in the downward direction (the object is falling). We instead take our origin of coordinates at sea level (height zero), so the velocity is negative in the downward direction (the sign meaning that the object is falling). The Second Newtonian Law states that the force equals mass times acceleration, i.e.

    F = Ma.

We know that, near sea level, gravity acts with acceleration given by −g = −9.8 m/s^2. Notice that the − in front indicates that the movement is directed downward (compare with the book). Now notice that if h(t + s), h(t) are two different heights for different times t < t + s, then

    v̄ = (h(t + s) − h(t))/s

is the mean velocity (with units of m/s). Taking the limit s → 0, we recover

    h'(t) = velocity at t.

With the same reasoning, we have

    h''(t) = acceleration at t.

Collecting both expressions, Newton's law gives M h''(t) = −gM, that is,

    h''(t) = −g.

Memento 1.2. Recall that, for definite integrals, the Fundamental Theorem of Calculus reads

    F(b) − F(a) = ∫_a^b f(x) dx,

where F'(x) = f(x).

Example 1.5. Solve the previous mathematical model of a falling object near sea level without considering friction, assuming that the initial height is h(0) = 1 and the initial velocity is h'(0) = 0.

Solution: The model is

    h''(t) = −g.

Consequently, applying the Fundamental Theorem of Calculus (FTC),

    h'(t) − h'(0) = ∫_0^t −g ds = −gt.

So, applying the FTC again,

    h(t) − h(0) = ∫_0^t h'(s) ds = ∫_0^t −gs ds = −gt^2/2.

We get the final answer

    h(t) = 1 − gt^2/2.

In particular, h(t) < h(0) and the object is falling.

Example 1.6. Write a mathematical model of a falling object near sea level considering friction.
Solution: The drag due to friction is usually modeled with a term γv(t), where v(t) is the velocity and γ is a constant with units of kg/s. Consequently, the term γv(t) is a force. Notice that we write it with a + sign because the friction is opposed to the movement (consequently, it's pointing upward). We use our previous model with the new term, and Newton's law M h''(t) = −gM + γv(t) gives

    h''(t) = −g + (γ/M) v(t).

As v(t) = h'(t), we can write the previous equation as

    v'(t) = −g + (γ/M) v(t).

Notice that we can, for different values of (t, v(t)), compute v'(t). Consequently, we can plot segments with slope v'(t) for each point (t, v(t)) (see Figure 1.1.2, pages 3 and 4 in the textbook). This kind of plot is called a slope or direction field.

[*** This slope field gives us a useful interpretation of the solution of an ODE. We can think of the solution of an ODE as the trajectory of a particle according to this slope field. In other words, this theoretical particle will move following a trajectory tangential to the slope field. ***]

Definition 1.2. For a given ordinary differential equation (ODE),

    P'(t) = F(P(t)),

the points pi ∈ R such that

    F(pi) = 0

are called equilibrium solutions. Notice that if P'(t) = 0, P(t) does not change with time.

Example 1.7. Find the equilibrium solution of the model derived in Example 1.6.

Solution: We have to solve the equation

    −g + (γ/M) v̄ = 0 ⇒ v̄ = gM/γ.

So,

    v(t) ≡ v̄ = gM/γ

is the equilibrium solution. In this context, the equilibrium solution is called the terminal velocity.

[*** Maybe you want to try by yourself Problems 1, 2, 11, 13, 15, 22, Section 1.1 ***]

1.1 Malthus's Law

Let's introduce our first model in population dynamics: Malthus' Law. This population dynamics model was introduced by Thomas Robert Malthus (18th century). There are two basic hypotheses:

1. the population change is proportional to itself.

2. there is no growth restriction (as it might be caused by finite space, ...)
The ordinary differential equation (ODE) is

    dP/dt = rP(t),  P(0) = P0,

where P0 ≥ 0 is the initial population and r ∈ R is the constant of proportionality.

Example 1.8. Solve Malthus' Law.

Solution: We solve it using that it's a separable equation:

    dP/dt = rP(t) ⇒ dP/P = r dt,

and, integrating,

    ∫_{P0}^{P(t)} dP/P = r ∫_0^t dt ⇒ ln(P(t)) − ln(P0) = r(t − 0) ⇒ P(t) = P0 e^{rt}.

Remark 1.1. When r < 0, Malthus' law models exponential decay and it is widely used as a model of radioactive decay.

In this way, we can find the solution of

    y'(t) = y(t),  y(0) = y0,

for y0 ∈ R fixed but arbitrary. The solution is then

    y(t) = y0 e^t.

As this solution is valid for every initial data, it is called a general solution. The geometrical representation of the general solution y(t) is an infinite family of curves called integral curves. Each integral curve is associated with a particular value of y0 and is the graph of the solution corresponding to that initial data.

[*** Maybe you want to try by yourself Problems 1, 3, 5, 13, 15, 22, Section 1.2 ***]

1.2 More on the classification of the DE

Let's elaborate on the classification of DE. As we have seen (see the first definition in this chapter), there are different classifications according to different characteristics:

1. Systems of ODE: When we have more than one unknown, we have a system of ODE. For instance, one may consider the evolution of the population of wolves and rabbits in a prescribed area. Then the ODE problem reads:

    w' = −w + wr,  r' = r − wr.

2. Order: As we have seen, the order of the highest derivative present in the equation is called the order of the ODE. For instance

    y' = y^2

is a first order ODE, while

    y'' = −sin(y)

is a second order ODE.

3. Linear vs. nonlinear: when the rate function is linear in the variables y, ..., d^{n−1}y/dt^{n−1}, we have a linear equation. Otherwise the equation is nonlinear.
For instance,

    y' = y^2 and y'' = −sin(y)

are nonlinear equations, while

    y' = y and y' = t^5 y

are linear equations.

Let's study an important example:

Example 1.9 (Exercise 1.3.29). Obtain the equation of the pendulum.

Solution: Let's assume that the rod has length 1 and the bob has mass equal to 1. We further assume that the only force acting is gravity (in particular, there is no friction). Then, the force is F = −g · e2, with e2 = (0, 1) (the gravity points downward). We can write F as a component parallel to the rod and a component perpendicular to the rod. The latter component is the angular one. The component parallel to the rod balances with the force that the rod exerts to prevent the bob from escaping. Consequently, we only have the angular component playing a role. A little bit of basic trigonometry gives us that this angular component can be written as

    Fa = g sin(θ),

where θ is the angle of the rod with the vertical (the y-axis). As this is a restoring force, we need to consider it with a − sign in front of it. We end with the pendulum equation:

    θ'' = −g sin(θ).

As we have seen, this is a nonlinear, second order equation. However, observe that, if the angle θ is very small, |θ| ≈ 0, we have sin(θ) ≈ θ. [*** This is obtained by Taylor's Theorem. Can you complete the details? ***] Consequently, we can approximate our original, nonlinear equation by a simpler, linear equation

    θ'' = −gθ.

Let's say some words about the concept of solution of an ODE. We say that f(t) is a solution to a given differential equation if, when we plug f(t) into the equation, the equation is satisfied. In this way, we obtain that f(t) = sin(t) is a solution to

    θ'' = −θ.

[*** Can you obtain another two solutions to this ODE? ***]
These solutions may be implicit or explicit:

Definition 1.3. Let g(t) be a function defined on an interval I, having the n-th derivative for all t in I. g(t) is called an explicit solution of the equation (1.1) if

1. d^n g(t)/dt^n − F(g(t), g'(t), g''(t), ..., d^{n−1} g(t)/dt^{n−1}, t) is defined for all t in I,

2. d^n g(t)/dt^n − F(g(t), g'(t), g''(t), ..., d^{n−1} g(t)/dt^{n−1}, t) = 0 for all t in I.

Example 1.10. Check that g(t) = e^t is a solution to y' = y.

Solution: We have g' = g, so we conclude.

Definition 1.4. A relation H(t, y(t)) = 0 is called an implicit solution of the ODE (1.1) if this relation produces at least one function g(t) defined on the interval I, such that g(t) is an explicit solution of (1.1) on I.

Example 1.11. Find an implicit solution of y' = −t/y.

Solution: This expression is equivalent to 2y'y = −2t, so, by the chain rule,

    d(y^2)/dt = −2t.

We can integrate and we get

    y^2(t) + t^2 = C.

Our implicit solution is then

    H(t, y(t)) = y^2(t) + t^2 − C.

[*** Maybe you want to solve Exercises 7-14, Section 1.3 ***]

Chapter 2

First order ODE

In this chapter we study equations of order one, i.e. of the form

    y'(t) = f(t, y(t)).

Example 2.1. Solve

    d/dt ((t + 5)y(t)) = t.

Solution: We integrate both sides to get

    ∫_0^t d/ds ((s + 5)y(s)) ds = ∫_0^t s ds = t^2/2.

The LHS is

    (t + 5)y(t) − 5y(0).

Consequently,

    y(t) = (5y(0) + t^2/2) / (t + 5).

In this example, we have used that the LHS was an exact derivative. We are going to use the same idea in the following section.

2.1 Integrating factors

Let's assume that we have the following equation

    P'(t) = −αP(t) + e^{−αt},  P(0) = P0.

We are going to use the Fundamental Theorem of Calculus with the chain rule to solve this equation. The idea is to find a function (the integrating factor) such that, by multiplying by this function, we get an exact derivative on the left hand side. We multiply the equation by µ(t).
This function µ(t) is called an integrating factor and remains unknown at this step (it's part of our job to find it!).

    µ(t)P'(t) = −αµ(t)P(t) + µ(t)e^{−αt},  P(0) = P0.

Notice that (by the product rule)

    d/dt (µ(t)P(t)) = µ'(t)P(t) + µ(t)P'(t),

so, if we impose µ'(t) = αµ(t), we have

    µ(t)P'(t) + αµ(t)P(t) = µ(t)P'(t) + µ'(t)P(t) = µ(t)e^{−αt},  P(0) = P0.

Solving the equation for µ(t), we have

    µ(t) = µ(0)e^{αt}.

We may take µ(0) = 1 to simplify (in fact, this choice does not affect the next computations [*** why? ***]). Inserting the expression for µ(t), we have

    d/dt (e^{αt}P(t)) = e^{αt}e^{−αt} = 1,  P(0) = P0.

We integrate both sides, getting e^{αt}P(t) − P0 = t, and we conclude

    P(t) = (P0 + t)e^{−αt}.

Let's assume now that we have the following equation (see Exercise 38, Section 2.1)

    P'(t) = −α(t)P(t) + f(t),  P(0) = P0.    (2.1)

We are going to use the Fundamental Theorem of Calculus with the chain rule to solve this equation. As before, the idea is to find a function (the integrating factor) such that, by multiplying by this function, we get an exact derivative on the left hand side. Notice that

    d/dt ∫_0^t α(s) ds = α(t),

and notice that

    d/dt (e^{β(t)}P(t)) = e^{β(t)}P'(t) + P(t)β'(t)e^{β(t)}.

Our equation can be written

    P'(t) + α(t)P(t) = f(t),  P(0) = P0,

so, if we multiply both sides by

    e^{∫_0^t α(s) ds},

we get

    e^{∫_0^t α(s) ds} P'(t) + e^{∫_0^t α(s) ds} α(t)P(t) = e^{∫_0^t α(s) ds} f(t),  P(0) = P0.

Now we use the previous formula with β(t) = ∫_0^t α(s) ds, and we get

    d/dt ( e^{∫_0^t α(s) ds} P(t) ) = e^{∫_0^t α(s) ds} f(t).

Integrating,

    e^{∫_0^t α(s) ds} P(t) = ∫_0^t e^{∫_0^u α(s) ds} f(u) du + C,

so

    P(t) = e^{−∫_0^t α(s) ds} ( ∫_0^t e^{∫_0^u α(s) ds} f(u) du + C ).    (2.2)

To fix the constant, notice that

    P(0) = P0 = e^{−∫_0^0 α(s) ds} ( ∫_0^0 e^{∫_0^u α(s) ds} f(u) du + C ) = C.

Example 2.2. Solve

    P'(t) = −4P(t) + f(t),  P(0) = P0.

Solution: We need to find the integrating factor.
As before, notice that the equation reads

    P'(t) + 4P(t) = f(t),  P(0) = P0,

and, multiplying by e^{4t}, we obtain

    e^{4t}P'(t) + 4e^{4t}P(t) = d/dt (e^{4t}P(t)) = e^{4t}f(t),  P(0) = P0.

Integrating and using

    ∫ d/dt (e^{4t}P(t)) dt = e^{4t}P(t) + C̃,    (2.3)

we obtain

    e^{4t}P(t) = ∫_0^t e^{4s}f(s) ds + C ⇒ P(t) = e^{−4t} ∫_0^t e^{4s}f(s) ds + e^{−4t}C,

where C = −C̃ is an arbitrary constant that we have to find. To find this constant we use the initial data:

    P0 = C ⇒ P(t) = e^{−4t} ∫_0^t e^{4s}f(s) ds + e^{−4t}P0.

Notice that with this method of finding the solution we end by finding a constant using the initial data. In our previous examples, this constant never appeared explicitly, as the limits of our integration process were fixed (compare with (2.3)). Of course, both methods are correct (if they are correctly used!) and lead to the same answer. In Example 2.5 we are going to explain this further.

[*** You should become familiar with this idea of looking for an exact derivative. We are going to use the same approach many, many times. So, remember, every time that you are asked to find the solution to an ODE, try first to see if there is some easy way to obtain some exact derivative on the LHS. If this does not work, then you apply your other ideas. ***]

[*** Maybe you want to solve Problems 1, 2, 13, 14, 21, 28, 29, 38, Section 2.1 ***]

One step forward 2.1 (SIR models for epidemics). Let's assume that we have a population that can be split in three subsets:

• Susceptible (S)

• Infected (I)

• Recovered (they can't get sick again) (R)

The flow of an epidemic is

    S → I → R.

The model is

    dS/dt = −rS(t)I(t)
    dI/dt = rS(t)I(t) − βI(t)    (2.4)
    dR/dt = βI(t)

where r is the rate of new infections coming from encounters between infected individuals and susceptible individuals, and β is the rate at which infected people become healthy again.

Example 2.3.
Use the integrating factor method to compute the number of infected individuals:

Solution: Multiplying by e^{βt}, we get

    e^{βt} dI/dt + βe^{βt}I(t) = rS(t)e^{βt}I(t),

    d/dt (e^{βt}I(t)) = rS(t)e^{βt}I(t),

    I(t) = I(0)e^{−βt} + e^{−βt} r ∫_0^t S(s)e^{βs}I(s) ds.

2.2 Separable equations

Definition 2.1. An ODE of the form

    f(y(t))y'(t) = g(t)

is called separable.

Example 2.4. Solve

    y'(t) = t^2 / y(t).

Solution: Notice that we can write y(t)y'(t) = t^2, so

    d/dt ( (y(t))^2 / 2 ) = t^2.

Integrating, we have

    (y(t))^2 / 2 − (y(0))^2 / 2 = t^3 / 3.

[*** As you see, we are using one more time the idea of looking for an exact derivative. So, keep that in mind when solving an ODE! ***]

The good thing about these equations is that you can always reduce them to a simple integration procedure. To see this, let's write F, G for the primitive functions of f, g respectively: F'(x) = f(x), G'(x) = g(x). We have

    f(y(t))y'(t) = F'(y(t))y'(t) = G'(t).

[*** Do you see an exact derivative here? ***] Now notice that, applying the chain rule,

    d/dt F(y(t)) = F'(y(t))y'(t),

so

    d/dt F(y(t)) = G'(t),

and, integrating,

    F(y(t)) − G(t) = C ≡ F(y(t0)) − G(t0).    (2.5)

In other words, we have found an implicit representation of the solution. This kind of expression is called implicit because, in general, it can not be written in the form y(t) = H(t).

[*** Notice that, no matter how intricate the expressions for f and g are, we can always do this computation. Of course, this is sort of cheating in the sense that, in general, we can not use (2.5) to write explicitly the expression for the dependent variable, y, as a function of the independent variable, t. ***]

Recall that y' = dy/dt, so we can rearrange the expression of a separable ODE as follows:

    f(y(t))y'(t) = g(t) ⇒ f(y)dy = g(t)dt.

[*** Notice that in the latter formula, the LHS only involves explicitly y, while the RHS only involves t. ***]
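The implicit relation (2.5) can be watched in action numerically. In Example 2.4 the quantity y(t)^2/2 − t^3/3 must stay constant along any solution. The following sketch is ours (the choice y(0) = 1 is an illustrative initial value, not from the notes): it integrates y' = t^2/y with a crude forward Euler scheme, anticipating the method of Section 2.5, and checks that the conserved quantity barely moves.

```python
# Forward Euler on the separable equation y'(t) = t^2 / y(t) from Example 2.4,
# with the illustrative initial value y(0) = 1. Along the numerical solution
# the implicit relation y(t)^2/2 - t^3/3 of (2.5) should stay approximately
# equal to its initial value y(0)^2/2 = 0.5.

def euler(f, y0, t0, t1, n):
    """Crude forward Euler scheme: returns the list of (t, y) pairs."""
    h = (t1 - t0) / n
    t, y = t0, y0
    out = [(t, y)]
    for _ in range(n):
        y += h * f(t, y)
        t += h
        out.append((t, y))
    return out

path = euler(lambda t, y: t * t / y, 1.0, 0.0, 1.0, 100000)
t_end, y_end = path[-1]
conserved = y_end ** 2 / 2 - t_end ** 3 / 3
print(round(conserved, 3))  # 0.5, up to the (small) discretization error
```

The smaller the step size, the closer this quantity stays to 0.5; this is the numerical counterpart of the exact derivative d/dt F(y(t)) − G'(t) = 0.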
Example 2.5. Solve the differential equation y'(x) = y^5(x), y(0) = 2. Can you say something about its behaviour for large times?

Solution: We are going to solve this equation following two methods.

1. First we are going to leave the limits of integration 'unknown', so a constant will appear and we have to solve for the constant using the initial data. We have

    dy/dx = (y(x))^5 ⇒ dy/y^5 = dx.

Integrating,

    ∫ dy/y^5 = y^{−4}/(−4) = x + C ⇒ 1/(y(x))^4 = −4C − 4x ⇒ y(x) = 1/(−4C − 4x)^{1/4}.

To fix the constant we use the initial value:

    y(0) = 1/(−4C)^{1/4} = 2 ⇒ −4C = 1/16.

We conclude

    y(x) = 1/(1/16 − 4x)^{1/4}.

2. Now we fix the limits of integration (so the constant appearing is perfectly known). Integrating,

    ∫_{y0}^{y(x)} dy/y^5 = (y(x))^{−4}/(−4) − (y0)^{−4}/(−4) = x.

So, with y0 = 2,

    y(x) = 1/(1/16 − 4x)^{1/4}.

Now notice that initially x = 0 and x grows continuously. So, eventually 4x will approach 1/16 and then

    lim_{x→1/64} y(x) = ∞.

Consequently, there is no solution for large times!!

[*** In this example we have used two different approaches to solving the ODE, each of them perfectly correct. However, there are a number of common errors.

Using the method with unknown limits of integration: This method is rather straightforward, but one can not forget the constant of integration. Moreover, you have to solve for the constant correctly to obtain the correct solution.

Using the method with known limits of integration: One can argue that this method is (mathematically) more rigorous. However, one needs to write the appropriate limits in the appropriate integral, i.e. you can not get confused with the limits in the dependent and the independent variables.

Both methods are perfectly fine. It is up to you which one you want to use. ***]

Example 2.6. Solve y'(x) = log(y(x))y(x), y(0) = y0 > 0.

Solution: This equation can be written as

    y'(x) / (y(x) log(y(x))) = 1.
As always, we are going to look for an exact derivative on the LHS. Notice that

    f(y(x)) = log(log(y(x)))

has derivative

    d/dx f(y(x)) = y' / (log(y)y),

so the equation reads

    d/dx log(log(y(x))) = 1.

We integrate both sides to get

    log(log(y(x))) − log(log(y0)) = log( log(y(x)) / log(y0) ) = x,

so

    log(y(x)) = log(y0)e^x,

and finally

    y(x) = y0^{e^x}.

Solution: [Example 2.6] We are going to solve Example 2.6 in a two-step procedure. Notice that if z = log(y), then, using the chain rule, dz = dy/y, so the equation in this new variable reads

    dz/z = dx.

Integrating, we have

    ∫_{z(0)}^{z(x)} dz/z = log(z(x)) − log(z(0)) = x = ∫_0^x dx.

Using the properties of the logarithm,

    z(x) = z(0)e^x.

We have to undo the change of variables, so

    log(y(x)) = log(y0)e^x ⇒ y(x) = y0^{e^x}.

Example 2.7. Solve

    y'(t) = 1 / cos(y(t)),  y(1) = π.

Solution: We write our equation as

    cos(y)dy = dt.

Notice that we have an exact derivative on the LHS. Consequently, we can easily integrate:

    ∫_{y(1)}^{y(t)} cos(y)dy = sin(y(t)) − sin(y(1)) = t − 1 = ∫_1^t dt.

So, as sin(y(1)) = sin(π) = 0,

    sin(y(t)) = t − 1,

and, choosing the branch of the arcsine compatible with the initial data y(1) = π,

    y(t) = π − arcsin(t − 1).

[*** For which times t is this y(t) well defined? ***]

Example 2.8. Solve

    y'(t) = t^2 + 1,

and determine the interval in which the solution exists.

Solution: We write the equation as dy = (t^2 + 1)dt. [*** Notice that in this problem we don't have an initial time or an initial data. In particular, we can not know the limits of integration. Consequently, in this case we are forced to use the method with the unknown constant. Integrating, we have

    y(t) = t^3/3 + t + C.

As the RHS is always well-defined, y(t) can be defined for every time t. ***]

[*** I would suggest the following exercises: 1-21, 30, Section 2.2 ***]

2.3 Existence, uniqueness and modeling

This section corresponds to Sections 2.3 and 2.4 in the textbook.
2.3.1 Modeling

As we have seen previously, differential equations are useful for non-mathematicians due to their wide range of applications. In particular, as an example, we have seen how ODE can be applied to medical modeling in the SIR system. In this section we are going to develop some easy models with a single equation and, finally, we are going to introduce a model of a zombie outbreak with a system of equations.

Let's consider a population of some kind of animal. When the growth is restricted (for instance because the space is finite or the available resources are finite), Malthus's Law doesn't apply and should be modified.

Example 2.9. Obtain a model of the population with the following hypotheses:

1. the rate of change of the population is a linear function of the population.

2. there is growth restriction.

Solution: Let's write P(t) for the number of animals in the population. We want to define a function F(P(t)) such that

1. F(P) is a linear function of P.

2. there exists a number N such that F(N) = 0. Or, in other words, there is growth restriction.

3. F(P) > 0 if 0 < P < N. In other words, the population should grow if its number is still small.

With these hypotheses we get

    dP/dt = r(N − P(t)),  P(0) = P0,    (2.6)

where r > 0 is the rate of growth and N is the maximum amount of individuals that the environment may hold.

Example 2.10. Solve (2.6).

Solution: We can solve the ODE as before,

    −dP/(N − P) = −r dt ⇒ ln(|N − P|) = −rt + C ⇒ N − P(t) = e^{−rt+C},

where C is the constant appearing from the integration procedure. We need to find this constant C. We have

    P(t) = N − e^{−rt+C},  P(0) = N − e^C = P0 ⇒ ln(N − P0) = C.

We conclude

    P(t) = N − e^{−rt}(N − P0).

Remark 2.1. This equation is called the von Bertalanffy equation and it can be used to describe the length of some fish.
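A quick numerical sanity check of Example 2.10 (our own sketch; the values r = 0.5, N = 100, P0 = 10 are illustrative choices, not from the notes): a forward Euler integration of (2.6) should track the closed form P(t) = N − e^{−rt}(N − P0), and both should approach the maximum sustainable population N for large t.

```python
import math

# Illustrative parameter values (our choice): growth rate, capacity, initial population.
r, N, P0 = 0.5, 100.0, 10.0

def exact(t):
    """Closed-form solution of (2.6): P(t) = N - e^{-rt}(N - P0)."""
    return N - math.exp(-r * t) * (N - P0)

# Forward Euler on dP/dt = r(N - P), integrated up to t = 10.
h, steps = 1e-4, 100000
P = P0
for _ in range(steps):
    P += h * r * (N - P)

print(abs(P - exact(10.0)) < 0.05)   # True: Euler tracks the closed form
print(abs(exact(50.0) - N) < 1e-6)   # True: P(t) approaches the capacity N
```

Note that, in contrast to Example 2.5, here the solution exists (and stays bounded) for all times: the equation is linear, which connects with the existence theory of the next subsection.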
[*** Before continuing, you should read carefully the worked examples in Section 2.3. ***]

One step forward 2.2 (A model for a zombie outbreak). We are going to explain the model by P. Munz, I. Hudea, J. Imad, R. Smith? (notice that the '?' is not a typo, it's his real name), When zombies attack!: mathematical modelling of an outbreak of zombie infection. Now the population can be split in three subsets:

• Susceptible (S)

• Zombie (Z)

• Removed (R)

The removed set is formed by the people who recently died and defeated zombies. A zombie can appear from two different situations:

1. Resurrected from the recently deceased (from the R group).

2. People recently bitten by a zombie (from the S group).

A zombie can move from group Z to group R if its brain is destroyed. The flow of this situation is more complicated than the standard SIR model: S flows to both Z and R, Z flows to R, and R can flow back to Z.

The system for this situation is, where α, β, γ, δ are positive constants,

    dS/dt = −αS(t)Z(t) − βS(t)
    dZ/dt = αS(t)Z(t) − γS(t)Z(t) + δR(t)    (2.7)
    dR/dt = βS(t) + γS(t)Z(t) − δR(t)

where

    αS(t)Z(t) is the rate of alive people that are bitten,
    βS(t) is the rate of alive people that die by other reasons (traffic accident, say),
    γS(t)Z(t) is the rate of zombies that die (again),
    δR(t) is the rate of dead people that resurrect as zombies.

[*** Define N(t) = S(t) + Z(t) + R(t) as in (2.7). Study its evolution in time and define a new system, equivalent to (2.7), with only two unknowns. (HINT: Compare (2.4) and (2.10)) ***]

[*** Prof. Smith? has a model of the 'Bieber fever' that you may find interesting, or, at least, curious. ***]

Figure 2.1: Dr. Zombie by Jorge David (Wikipedia)

2.3.2 Existence and uniqueness

We want to know if a given ODE has a solution and if this solution is unique.
There are several reasons for that, but the most practical is that we don't want to work hard trying to solve an ODE with no solution. Another very important reason is that, if a given ODE appearing in physics, let's say, has more than one solution, then we can not forecast the behaviour of the physical system.

Let's consider first a linear ODE

    y' + α(t)y(t) = f(t),  y(t0) = y0.    (2.8)

Then, as we have seen in Section 2.1, we have the following theorem:

Theorem 2.1. Let α(t), f(t) be continuous functions in an interval I = [a, b] such that t0 ∈ I. Then there exists a unique function y(t) (see (2.2)), defined for t ∈ I and satisfying (2.8).

Now, using that we have an explicit solution for the linear case, we can extract several consequences from it.

[*** 1. Assuming that the coefficients α(t), f(t) are continuous, there is a general solution, containing an arbitrary constant (which depends on the initial data), that includes all solutions of the differential equation:

    y(t) = e^{−∫_0^t α(s) ds} ( ∫_0^t e^{∫_0^u α(s) ds} f(u) du + C ).

2. The solution y(t) is as good (or as bad) as the coefficients α(t), f(t). In particular, if α(t), f(t) are continuous for every t, then the function y(t) is defined and is differentiable for every t. ***]

If instead of a linear equation we have the nonlinear equation

    y' = f(y(t), t),  y(t0) = y0,    (2.9)

the situation is much more involved. We have

Theorem 2.2. Let the functions f and ∂f/∂y be continuous in some rectangle [a, b] × [c, d] containing the point (t0, y0). Then, in some (maybe very small) interval t0 − h < t < t0 + h contained in [a, b], there is a unique solution y(t) of the initial value problem (2.9).

[*** Recall that uniqueness implies that two integral curves can not touch each other! ***]

Let's study an important example showing a bad behaviour:

Example 2.11. Study the following initial value problem

    y' = y^{1/3},  y(0) = 0.
DR and integrating, AF Tv 2- Solution: This ODE is separable, an it can be solved with these methods. We have dy = dt, y 1/3 1.5(y(t))2/3 = t 3/2 2 y(t) = . t 3 3/2 Now notice that y(t) = 0 is also a solution. Furthermore, − 32 t . is also a solution. We see that we can construct a whole bunch of different solutions just by glueing together these three basic bricks. Finally, notice that f (y) = y 1/3 has no a well defined partial derivative, so the Theorem does not apply to this case. [*** There are a bunch of important remarks concerning nonlinear equations that we should keep in mind. 1. If the theorem can not be applied, we can have examples where the uniqueness blows up (so, more than one solution!). See for instance Example 2.11. 20 2.4. Equations from population dynamics 2. If the theorem can not be applied, we can have examples where there is no solution. 3. We can have a unique solution that exists for a small time. instance Example 2.5. See for ***] We observe that there are few similarities between the behaviour of linear and non-linear ODE. Furthermore, almost none of the statements that work for linear eq. actually work for nonlinear eq. Maybe the one that we should take more cautiously is the lack of global existence even for very smooth and well-behaved rate functions f (y, t). Equations from population dynamics nc hó 2.4 n [*** Maybe you want to try the exercises: 1-12, 25,27 Section 2.4 ***] er oBe li This is the material corresponding to Section 2.5 in the textbook. Definition 2.2. If the rate function f depends explicitly only on y, i.e. f (t, y) ≡ f (y), then the ODE y ′ (t) = f (y(t)), y(t0 ) = y0 , ra n is called autonomous. DR and its variation is AF Tv 2- R. G Notice that this equation is always separable (see Definition 2.1. Definition 2.3. Given f (y), the zeroes of f (y) ( i.e. the points yi such that f (yi ) = 0) are called critical points or equilibrium solutions. One step forward 2.3. Previously we introduced the SIR model (2.4). 
Notice that the total population is N(t) = S(t) + R(t) + I(t), and its variation is

dN/dt = dS/dt + dR/dt + dI/dt = (using the system) = 0.

So, the total population doesn't change. Then, as the total population doesn't change,

N(t) = N(0) = S(0) + R(0) + I(0) ⇒ R(t) = N(0) − I(t) − S(t).

To simplify, let's take N(0) = 1 (thus, S, R, I are percentages). The system (2.4) now reads

dS/dt = −rS(t)I(t)
dI/dt = rS(t)I(t) − βI(t)        (2.10)

The equilibria are solutions of

−rSI = 0 and rSI − βI = 0,

Figure 2.2: SIR Model (s' = −rsi, i' = rsi − bi, with r = 0.2, b = 0.1)

so, from the first equation, S = 0 or I = 0, and, from the second equation, I(rS − β) = 0. Thus, writing (S, I), the equilibria are (0, 0) and (β/r, 0).

Then notice that the evolution of the amount of infected people depends on the amount of healthy people in a straightforward way:

if S(t) = β/r ⇒ dI/dt = 0,
if S(t) > β/r ⇒ dI/dt > 0,
if S(t) < β/r ⇒ dI/dt < 0.

So, we conclude a very interesting fact: if initially

S(0) > β/r ⇒ the number of infected individuals grows,
S(0) < β/r ⇒ the number of infected individuals decays.

This means that a vaccination procedure that ensures a low number of S (depending on the particular illness) is enough to guarantee the health of the group. So, we don't need to vaccinate everyone in the population, only the required amount.

[*** Can you find the equilibria for the system (2.7)? ***]

Figure 2.3: a) Cable about the muskrats to The New York Times b) A muskrat.

2.4.1 Malthus Law

Let's remember Malthus' Law with a practical example:

Example 2.12. In Figure 2.3, we can see how the muskrats appear in Europe.
The question is: assuming that the muskrat population grows according to Malthus' law, estimate, using the numbers and dates in the cable to The New York Times, the constant of proportionality/rate of growth r.

Solution: We take 1905 as the initial time. We have

P(0) = the population in the Czech Republic in 1905 = 20,
P(9) = the population in the Czech Republic in 1914 = 200000.

Thus, using the formula for the solution, we have

P(9) = 200000 = P0 e^{9r} = 20e^{9r},

so,

10000 = e^{9r} ⇒ r = ln(10000)/9 ≈ 1.02.

Recall the von Bertalanffy equation (2.6) from the previous section. The solution to this equation is

P(t) = N − e^{−rt}(N − P0).

We notice that

lim_{t→∞} P(t) = N,

and also that if P(t) = N then P'(t) = 0. This is an example of an extremely important concept in differential equations:

Definition 2.4. Let P'(t) = F(P(t)) be the considered ODE. Assume that y ∈ R is a number such that

F(y) = 0,

then y is called an equilibrium point.

These equilibria can be stable (i.e. the solution moves towards them) or unstable (i.e. the solution moves away from them). Let's state these concepts in a rigorous way:

Definition 2.5. Let P'(t) = F(P(t)) be the considered ODE. Assume that y ∈ R is an equilibrium point. Compute F'(y). Then

• if F'(y) < 0 the equilibrium is stable.
• if F'(y) > 0 the equilibrium is unstable.

Now we see that y = N is an equilibrium and F'(y) = −r < 0, so it's stable.

[*** The geometric interpretation of the previous conditions is the following: recall that F(y) = 0 (because we are evaluating at a critical point). Then, in the case of a stable equilibrium according to the previous definition, we have F'(y) < 0. This implies that the function F is (locally) decaying. In particular, for P < y close enough we have F(P) > 0, and if y < P then F(P) < 0. In other words,

P'(t) = F(P(t)) > 0 if P(t) < y, P(t) close to y,
P'(t) = F(P(t)) < 0 if P(t) > y, P(t) close to y.
And we obtain that P(t) approaches y. We can apply the same reasoning to the unstable equilibrium. ***]

2.4.2 Logistic growth

Now the hypotheses are

1. if the population is small, its change is proportional to itself.
2. there is a growth restriction.
3. as the population grows, the effect of the growth restriction is higher.

It is then when we use the logistic equation:

dP/dt = rP(t)(1 − P(t)/N),  P(0) = P0,

where r > 0 is the rate of growth and N is the maximum population. Both parameters are data. As before, we have a separable equation, so

dP/(P − P²/N) = rdt,

∫ dP/(P − P²/N) = ∫ rdt,

using partial fractions, we get

∫ ( (1/N)/(1 − P/N) + 1/P ) dP = rt + C,

where C is the constant appearing from the indefinite integration. Thus,

−ln|1 − P/N| + ln|P| = rt + C ⇒ NP(t)/(N − P(t)) = e^{rt+C},

P(t) = Ne^{rt+C}/(N + e^{rt+C}).

We need to find the constant C. We use that at t = 0 we have P(0) = P0, so

P(0) = Ne^C/(N + e^C) ⇒ NP0/(N − P0) = e^C ⇒ C = ln( NP0/(N − P0) ).

When we introduce this expression for C we get

P(t) = NP0 e^{rt}/(N + (e^{rt} − 1)P0).

This will be our final expression. In Figure 2.4, you can see the evolution of solutions corresponding to different initial data, P0.

Example 2.13. Let's assume that the world population follows a logistic growth. Using the table

Year                  1990  1995  2000  2005  2010
Population (millions) 5263  5674  6070  6454  6972

compute the parameters r and N. What is your approximation to the human population in 2015? (The UN estimation is 7324 millions.)

Figure 2.4: Solutions of the logistic equation x' = rx(1 − x/N) (r = 1, N = 5) corresponding to different initial data, P0

Solution: Taking 1990 as our initial time we get P(0) = P0 = 5263. We have

P(5) = 5674 = 5263Ne^{5r}/(N + 5263(e^{5r} − 1)),

P(10) = 6070 = 5263Ne^{10r}/(N + 5263(e^{10r} − 1)).

From these two equations we get N as a function of r in two equivalent ways:

N = 5674 · 5263(e^{5r} − 1)/(5263e^{5r} − 5674),

and

N = 6070 · 5263(e^{10r} − 1)/(5263e^{10r} − 6070).

As both expressions are for the same value N (which is a unique number), we have an equation for r:

5674 · 5263(e^{5r} − 1)/(5263e^{5r} − 5674) = 6070 · 5263(e^{10r} − 1)/(5263e^{10r} − 6070).

Using e^{10r} − 1 = (e^{5r} + 1)(e^{5r} − 1), after some simplifications,

5674/(5263e^{5r} − 5674) = 6070(e^{5r} + 1)/(5263e^{10r} − 6070),

and

5674(5263e^{10r} − 6070) = 6070(e^{5r} + 1)(5263e^{5r} − 5674).

If we write x = e^{5r}, the previous equation is a second order algebraic equation that we can solve with the quadratic formula, getting r ≈ 0.036.
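The algebra above can be cross-checked numerically. The following Python sketch (my own; the notes use Matlab elsewhere) recovers r by equating the two expressions for N with a bisection, then evaluates the logistic formula at t = 25 (the year 2015):

```python
import math

# Data from Example 2.13 (populations in millions).
P0, P5, P10 = 5263.0, 5674.0, 6070.0

def N1(r):
    # N as a function of r from the P(5) equation
    return P5 * P0 * (math.exp(5 * r) - 1) / (P0 * math.exp(5 * r) - P5)

def N2(r):
    # N as a function of r from the P(10) equation
    return P10 * P0 * (math.exp(10 * r) - 1) / (P0 * math.exp(10 * r) - P10)

def fit_r(lo=0.02, hi=0.06, steps=200):
    # bisection on g(r) = N1(r) - N2(r); there is a sign change on [0.02, 0.06]
    g = lambda r: N1(r) - N2(r)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def logistic(t, r, N):
    # the final expression P(t) = N P0 e^{rt} / (N + (e^{rt} - 1) P0)
    return N * P0 * math.exp(r * t) / (N + (math.exp(r * t) - 1) * P0)
```

With these definitions, `fit_r()` returns r ≈ 0.036, `N1(r)` returns N ≈ 9400 and `logistic(25, r, N)` returns roughly 7122 million, matching the numbers in the text.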
We conclude that y1 = 0 is stable while y2 = 1 is unstable. Furthermore, as the solution grows if initially is bigger than y2 = 1, we can think on y2 = 1 as a threshold level. [*** This equation is separable. Can you solve it? HINT: try to use the same approach as for the logistic case in the previous section. ***] [*** I would recommend exercises 1-6,16,17,23 Section 2.5 ***] 27 Chapter 2. First order ODE 40 Proporción de tiburones capturados 35 30 25 20 15 10 1 2 3 4 5 6 7 8 Años nc hó ra n er oBe li % sharks 11.9 21.4 22.1 21.2 36.4 27.3 16.0 15.9 R. G Year 1914 1915 1916 1917 1918 1919 1920 1921 n Figure 2.5: Evolution of the shark captures (percentage) The Lotka and Volterra predator-prey model AF Tv 2.4.4 2- Table 2.1: Shark captures DR One step forward 2.4. In the mid 20’s, an italian biologist, D’Ancona, was studying the variations in the populations of different fish in the Mediterranean sea. He noticed that from 1914 to 1918 the captures go from 11.9 to 36.4 %. Let’s recall that the Great War or World War I devastate Europe and Africa during the same years. D’Ancona ask Vito Volterra, an italian mathematician, about the problem. Volterra simplified the problem with some hypotheses: 1. There are only preys, F , and predators, S. 2. There is no growth restriction for the preys. This means that, in absence of sharks, the prey population grows according to Malthus law. 3. The number of encounters between preys and predators depends from the populations itself as F S. 4. In absence the preys, the predators starve and die with an exponential decay. 5. The predator population grows when the sharks are fed. 6. There is no other effect (like fishing...). 28 2.4. Equations from population dynamics With these hypotheses, the system of equations is    dF = F − SF dt dS   = −cS + SF dt To find the equilibria of this system we need to solve 0 = F − SF n 0 = −S + SF er oBe li nc hó The solutions are (0, 0) and (1, 1). 
The first point, the origin is not very interesting as it represent the absence of animals in the sea. Around the point (1, 1), we find closed curves (see Figure 2.6). This means that the fish and shark population oscillates (the qualitative behaviour is like sin(x) and cos(x)). prey ’ = (A − B predator) prey predator ’ = (D prey − C) predator B = 0.01 D = 0.005 ra n 80 G 70 R. 60 2- 50 40 AF Tv predator A = 0.4 C = 0.3 Lotka−Volterra 30 20 DR 10 0 0 20 40 60 prey 80 100 120 Figure 2.6: Fishes vs. Sharks To prove rigorously this qualitative behaviour is not so easy. So, we are going to address the problem in a different way. Let’s assume that the populations are S(t) = 1 + δs(t), F (t) = 1 + δf (t) with |δ| << 1. This means that initially our populations are close to (1,1). If we plug our assumption in the equations, we get d d d d S = δ s, F = δ f. dt dt dt dt 29 Chapter 2. First order ODE Now, we simplify δ ra n er oBe li nc hó    df = −s − sδf dt ds   = f + f δs dt As δ << 1 we can approximate the system by the simpler system    df = −s dt   ds = f dt n    δ df = 1 + δf − (1 + δs)(1 + δf ) dt ds   δ = −(1 + δs) + (1 + δs)(1 + δf ) dt    δ df = 1 + δf − (1 + δf + δs + δsδf ) dt ds   δ = −1 − δs + 1 + δf + δs + δf δs dt    δ df = −δs − δsδf dt ds   δ = δf + δf δs dt G Now, notice that, if we take a second derivative in any of the previous equation, we get 2- R. d d2 f = − s = by the second equation = −f. 2 dt dt AF Tv Recall that DR d2 sin(x) = − sin(x), dt2 so, we can expect (kind of ) periodic solutions close to the equilibrium (1,1). 2.5 Forward Euler method In general it’s very difficult (or even impossible) to find the exact, explicit solution of a given differential equation. As you can imagine, we are going to use the computer to find an approximate solution. The construction of numerical methods to approximate solutions of differential equations is an area of expertise by itself. Assume that we have the ODE P ′ (t) = F (P (t)), P (0) = P0 . 
Integrating we get

P(t) − P(0) = ∫_0^t F(P(s))ds.

The problem is that, as we don't have the expression for P(s), the RHS can not be evaluated. Then we are going to approximate the RHS as

∫_0^t F(P(s))ds ≈ tF(P(0)).

Memento 2.1. This is just some sort of Riemann sum taking the height of the rectangle equal to the value at the left endpoint. Remember that the integral equals the area below the curve F(P(s)), so we are approximating this area by the area of a rectangle.

To achieve a better accuracy, if we want to approximate the solution up to time T > 0, we are going to define a partition (with n subintervals) of the time interval

0 = t0 < t1 = T/n < t2 = 2T/n, ..., tn = T,

and we are going to compute n steps:

P(t1) − P(t0) = ∫_{t0}^{t1} F(P(s))ds ≈ (t1 − t0)F(P(t0)),
P(t2) − P(t1) = ∫_{t1}^{t2} F(P(s))ds ≈ (t2 − t1)F(P(t1)),
P(t3) − P(t2) = ∫_{t2}^{t3} F(P(s))ds ≈ (t3 − t2)F(P(t2)),

until

P(tn) − P(tn−1) = ∫_{tn−1}^{tn} F(P(s))ds ≈ (tn − tn−1)F(P(tn−1)).

This method is known as the Forward Euler method.

We can give another interpretation, more dynamical. We have seen that F gives us a slope field. Furthermore, we can think of the solution of an ODE as the trajectory of a particle according to this slope field. Using the Euler method, we are approximating the trajectory by a piecewise straight trajectory. In other words, we can think that we are moving by big steps. At every time ti, we look at our slope field, modify if needed our direction to be F(P(ti)) and then we take another step.

Let me show with an example why one should know about differential equations before approximating its solution.

Example 2.14. Consider

y'(t) = y(t)²,  y(0) = 1.

Approximate this equation using the forward Euler method and compare with the exact solution.

Solution: First, let's compute the solution. As before,

dy/y² = dt ⇒ −1/y(t) = t + C ⇒ y(t) = 1/(−t − C).
Figure 2.7: Exact solution (red) and approximate solution (blue) corresponding to the same initial data, P0 = 1.

Now we use the initial data to fix the constant C:

y(0) = 1 = 1/(−C) ⇒ C = −1,

so

y(t) = 1/(1 − t).

If we use Matlab to approximate the solution and we compare with the exact solution obtained before, we get Figure 2.7. Notice that the approximate solution (blue) EXISTS AFTER t = 1!!! and we know that the exact solution (red) doesn't!!!

%% Forward Euler Method
%% Number of time steps
N=50;
%% Final time
T=2;
dt=T/(N-1);
t=[0:dt:T];
%% Initial data
y0=1;
y(1)=y0;
for j=1:length(t)-1
    y(j+1)=y(j)+((y(j))^2)*dt;
end

2.6 Existence Theorem revisited

This section corresponds to Section 2.8 in the textbook. Let's consider the following ODE

y' = f(t, y),  y(0) = y0.

As f may be nonlinear, we need to apply the following theorem:

Theorem 2.3. Let the functions f and ∂f/∂y be continuous in some rectangle [a, b] × [c, d] containing the point (t0, y0). Then, in some (maybe very small) interval t0 − h < t < t0 + h contained in [a, b], there is a unique solution y(t) of the initial value problem (2.9).

We are going to study some ideas coming from the proof of this result. First, notice that the ODE can be written as

y(t) = y0 + ∫_0^t f(s, y(s))ds.

If y(t) is a smooth function, then both approaches are equivalent. We are going to use the integral formulation to construct a sequence of approximate solutions. The limit of this sequence will be our desired solution. This method is called the method of successive approximations or Picard's iterates. We start with

φ0 = y0,
φ1 = y0 + ∫_0^t f(s, φ0(s))ds,
...
φn+1 = y0 + ∫_0^t f(s, φn(s))ds.

[*** Notice that if n = ∞, we have

φ∞ = y0 + ∫_0^t f(s, φ∞(s))ds,

and this φ∞ is a solution to our original ODE in the integral formulation.
***]

Example 2.15 (Exercise 4, Section 2.8). Solve

y' = −y − 1,  y(0) = 0,

using the method of successive approximations.

Solution: We have

φ0(t) = 0,
φ1(t) = −∫_0^t (φ0 + 1)ds = −t,
φ2(t) = −∫_0^t (1 − s)ds = −t + t²/2,
φ3(t) = −∫_0^t (1 + (−s + s²/2))ds = −t + t²/2 − t³/6.

Consequently, we can guess that

φn(t) = −t + t²/2 − t³/6 + ... − t^n/n!, if n is odd,

and

φn(t) = −t + t²/2 − t³/6 + ... + t^n/n!, if n is even.

We can check easily that our guess is correct. Let's compute φn+1 assuming that we can use our formula for φn. Let's start with the case n odd. We have

φn+1(t) = −∫_0^t (1 − s + s²/2 − s³/6 + ... − s^n/n!)ds = −( t − t²/2 + t³/6 − ... − t^{n+1}/(n+1)! ).

As n + 1 should be even (recall that n was odd), this is the appropriate formula. We can proceed similarly with the case n even [*** Can you finish the details? ***].

We know the following Taylor expansion

e^x = 1 + x + x²/2 + x³/6 + ...

As a consequence,

e^{−x} = 1 − x + x²/2 − x³/6 + ...

It seems that

φ∞ = e^{−t} − 1.

Now, let's try to solve the original ODE by means of integrating factors. We have

μ(t)y' + μ(t)y = −μ(t),

and we impose

μ(t)y' + μ(t)y = μ(t)y' + μ'(t)y = d/dt (μ(t)y(t)),

so

μ' = μ.

We know that μ(t) = e^t. We compute then

d/dt (e^t y(t)) = −e^t,

and, integrating,

e^t y(t) = −e^t + 1,

so

y(t) = e^{−t} − 1,

so, our successive approximation scheme was converging to the appropriate function.

[*** Maybe you want to read carefully the example on page 115 in the textbook and then try the problems 5-7 and 8 in Section 2.8. ***]

2.7 Discrete models

This section corresponds to Section 2.9 in the textbook. In general, we have been studying the case where t lies in some interval (it's a real number). Here we are going to introduce the case where t lies in some lattice (like the positive integers). This means that t is not a continuous variable.
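The passage from continuous to discrete time is exactly what the forward Euler method of Section 2.5 does: the update P_{k+1} = P_k + Δt·F(P_k) is itself a relation between consecutive discrete values. A minimal Python sketch of the scheme (my own; the notes use Matlab), tested on y' = y, y(0) = 1, whose exact solution at t = 1 is e:

```python
import math

# Forward Euler as derived in Section 2.5:
# P(t_{k+1}) ≈ P(t_k) + (t_{k+1} - t_k) * F(P(t_k)).
def forward_euler(F, P0, T, n):
    """Approximate P(T) for P' = F(P), P(0) = P0, using n uniform steps."""
    P = P0
    dt = T / n
    for _ in range(n):
        P = P + dt * F(P)
    return P
```

With n = 100000 steps, `forward_euler(lambda y: y, 1.0, 1.0, 100000)` computes (1 + 1/n)^n, which differs from e = 2.71828... by roughly e/(2n).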
We consider now the problems

yn+1 = f(yn),  y0 = α,  n = 1, 2, 3, ...        (2.11)

These kinds of problems are known as difference equations. Notice that, for a discrete variable, the notion of derivative does not exist (there is no limit process), so usually we replace the incremental quotient by the difference [*** Check again the forward Euler method ***]. Notice that,

y1 = f(y0),
y2 = f(y1) = f(f(y0)) = f²(y0),
y3 = f(y2) = f(f(f(y0))) = f³(y0),
...
yn+1 = f(yn) = f^{n+1}(y0).

Definition 2.6. A sequence of values {yn} such that

yn+1 = f(yn) = f^{n+1}(y0)

is called a solution to the problem (2.11). A solution such that yn has the same value for every n is called an equilibrium solution.

[*** To find the equilibrium solutions of a discrete model (2.11), we look for solutions of the algebraic equation

y = f(y).

This corresponds to the equilibrium condition yn+1 = yn. ***]

Example 2.16 (Fibonacci numbers). We consider the discrete problem

yn+2 = yn+1 + yn,  y0 = y1 = 1.

Compute the first 5 elements in the solution.

Solution: This constitutive relation gives us the famous Fibonacci numbers. We compute

y0 = y1 = 1, y2 = y0 + y1 = 2, y3 = y2 + y1 = 3, y4 = y3 + y2 = 3 + 2 = 5, y5 = y4 + y3 = 5 + 3 = 8.

This model was proposed by Leonardo Pisano (aka Fibonacci) as a model of a population of rabbits.

Example 2.17 (Linear difference equation). We consider the discrete problem

yn = ryn−1 + b,  y0 = α.

Compute the solution and study its large time behaviour depending on the value of r.

Solution: We have

y1 = rα + b,
y2 = ry1 + b = r(rα + b) + b,

so,

yn = ryn−1 + b = r^n α + b + rb + r²b + ... + r^{n−1}b.

In the case r = 1, we can easily get the expression for the solution:

yn = α + nb.

Let's consider the case r ≠ 1.
Now notice that if we write Sn−1 = 1 + r + r² + ... + r^{n−1}, we have

Sn = rSn−1 + 1, and Sn − Sn−1 = r^n.

Using both expressions, we have

Sn − Sn−1 = rSn−1 + 1 − Sn−1 = r^n,

and

Sn−1 = (r^n − 1)/(r − 1).

Consequently, the solution is

yn = r^n α + ((r^n − 1)/(r − 1)) b.

Using the expressions that we have computed, we obtain that

• if r = 1, yn is unbounded (when b ≠ 0).
• if |r| < 1, yn → −b/(r − 1) = b/(1 − r). (Notice that r^n → 0 in this case.)
• if |r| > 1 or r = −1, then yn is unbounded (unless α + b/(r − 1) = 0). If r ≤ −1, the solution oscillates.

Let's focus now on a nonlinear example:

yn = kyn−1(1 − yn−1).

Notice that

φ1n ≡ 0 and φ2n ≡ 1 − 1/k

are equilibrium solutions. Let's see how this system evolves using the computer.

% We study numerically the system Xn+1=kXn(1-Xn)
disp('---First Experiment:---')
k1=input('Give me the value of $k$ from 0 to 4:');
x0=input('Give me the initial value from 0 to 1:');
x1(1)=x0;
for i=1:100
    x1(i+1)=k1*x1(i)*(1-x1(i));
end
disp('---Second Experiment:---')
k2=input('Give me the value of $k$ from 0 to 4:');
y0=input('Give me the initial value from 0 to 1:');
y1(1)=y0;
for i=1:100
    y1(i+1)=k2*y1(i)*(1-y1(i));
end
disp('---Third experiment:---')
k3=input('Give me the value of $k$ from 0 to 4:');
z0=input('Give me the initial value from 0 to 1:');
z1(1)=z0;
for i=1:100
    z1(i+1)=k3*z1(i)*(1-z1(i));
end
subplot(4,1,1)
plot(x1);title('First experiment');
subplot(4,1,2)
plot(y1);title('Second experiment');
subplot(4,1,3)
plot(z1);title('Third experiment');
subplot(4,1,4)
plot(z1,'r');
hold on
plot(x1);
hold on
plot(y1,'k');title('All together');
hold off
a=input('Press a key')
disp('Lets plot a bifurcation diagram')
K=0:0.01:4; % Different values of k
X=zeros(length(K),5000);
X(:,1)=0.3; % the initial data (the same for every experiment)
for j=1:length(K)
    for i=1:5000
        X(j,i+1)=K(j)*X(j,i)*(1-X(j,i));
    end
end
Y1=X(:,4000:end);
figure
plot(Y1);title('Bifurcation diagram')

Figure 2.8: a) k = 2, y0 = 0.5, b) k = 2, y0 = 0.45, c) k = 2, y0 = 0.55

Figure 2.9: a) k = 3, y0 = 0.5, b) k = 3, y0 = 0.45, c) k = 3, y0 = 0.55

Now let's explain the last picture. The last picture shows the large time behaviour of the solution for a given value of k. We see that if 0 ≤ k ≤ 1, the equilibrium solution φ1n ≡ 0 is stable. Then, if k is bigger than 1, the stability changes and now it is the other equilibrium solution φ2n that is stable. Later on, if k ≥ 3, the stability changes again and there is a periodic solution (with period two). Finally, for k close to 4 (around 3.6) we have chaotic behaviour. This means that the dynamics is very complicated and it seems that there is a lack of patterns. Another key characteristic of the chaotic regime is that it is very sensitive to the initial conditions. This means that two very similar initial conditions lead to very different solutions in a very short time. The chaotic behaviour is not limited to discrete models, and there are many examples of chaotic behaviour in nonlinear systems of ODEs.

Figure 2.10: a) k = 3.7, y0 = 0.5, b) k = 3.7, y0 = 0.45, c) k = 3.7, y0 = 0.55
Figure 2.11: Bifurcation diagram, k from 0 to 4, y0 = 0.3.

[*** Maybe you want to try the exercises 1-4 and 7, 8 in Section 2.9 ***]

Chapter 3

Second order ODE

In this chapter we study equations of order two, for instance

y''(t) + g(t)y'(t) + h(t)y(t) = f(t).        (3.1)

We start with a couple of definitions:

Definition 3.1. When f(t) ≡ 0, equation (3.1) is called homogeneous. Otherwise, it is called non-homogeneous.

We are going to focus our attention on the homogeneous case with constant coefficients:

ay''(t) + by'(t) + cy(t) = 0.        (3.2)

We know that the solution to a linear, homogeneous first order equation is given by some exponential with the appropriate exponent. For instance, the solution to

y'(t) = 5y(t)

is given by

y(t) = c1 e^{5t}.

Notice that, once we know how to compute the solution to the homogeneous case, we can use it to construct the integrating factor and, consequently, the solution to the non-homogeneous case

y'(t) = 5y(t) + f(t).

If we try to look for a solution to (3.2) with the form y(t) = e^{rt}, we find

e^{rt}(ar² + br + c) = 0.

As the exponential is not zero (at least, for bounded time t), we get

ar² + br + c = 0.

This is called the [*** characteristic equation ***]. Let's assume for the moment that we can find two solutions:

r = (−b ± √(b² − 4ac))/(2a).

Now notice that if y1(t) and y2(t) are solutions to (3.2), then the same applies to

y(t) = c1 y1(t) + c2 y2(t).

This is called the Principle of superposition. We can easily check this fact. Let's insert y(t) into the equation:
a(c1 y1(t) + c2 y2(t))'' + b(c1 y1(t) + c2 y2(t))' + c(c1 y1(t) + c2 y2(t))
= c1 (ay1''(t) + by1'(t) + cy1(t)) + c2 (ay2''(t) + by2'(t) + cy2(t)) = 0,

where we have used that y1 and y2 are two solutions to (3.2). Consequently, the general solution to (3.2) is given by

y(t) = c1 e^{r1 t} + c2 e^{r2 t},

where

r1 = (−b + √(b² − 4ac))/(2a),  r2 = (−b − √(b² − 4ac))/(2a).

Notice that, as there are two arbitrary constants c1 and c2, we require two initial data to determine a unique solution.

Example 3.1. Find the solution to

y'' − (√5 + 2)y' + 2√5 y = 0,

given

y(0) = 0, y'(0) = 1.

Solution: We insert our ansatz y(t) = e^{rt}, and we get

r² − (√5 + 2)r + 2√5 = 0.

This polynomial has solutions

r = √5 and r = 2.

Consequently, the general solution is

y(t) = c1 e^{2t} + c2 e^{√5 t}.

Now we insert our initial data. We get

y(0) = c1 + c2 = 0,  y'(0) = 2c1 + √5 c2 = 1.

We have to solve this system of equations in the unknowns c1 and c2. We obtain

c1 = −c2 = 1/(2 − √5).

We conclude that the solution is

y(t) = (1/(2 − √5)) e^{2t} − (1/(2 − √5)) e^{√5 t}.

Example 3.2. Find the general solution to

y''(t) − 5y(t) = 0.

Solution: The characteristic equation is now

r² − 5 = (r − √5)(r + √5) = 0.

Consequently, the general solution is

y(t) = c1 e^{√5 t} + c2 e^{−√5 t}.

The main result for linear second order ODE is

Theorem 3.1. Consider the initial value problem (3.1) with initial data

y(t0) = y0,  y'(t0) = y0'.        (3.3)

Let f(t), g(t), h(t) be continuous on an open interval I that contains the point t0. Then there is exactly one solution y(t) of this problem, and the solution exists throughout the interval I.

Notice that the principle of superposition applies even if the coefficients are not constant, as long as f(t) ≡ 0. Let's assume that y1(t) and y2(t) are two solutions.
Then we have

(c1 y1(t) + c2 y2(t))'' + g(t)(c1 y1(t) + c2 y2(t))' + h(t)(c1 y1(t) + c2 y2(t))
= c1 (y1''(t) + g(t)y1'(t) + h(t)y1(t)) + c2 (y2''(t) + g(t)y2'(t) + h(t)y2(t)) = 0.

3.1 The wronskian

So far we have a way of finding general solutions to a kind of second order ODE. The main question that arises now is whether the constants in the general solution can be chosen so that the initial data also hold. In order that the general solution with form

y(t) = c1 y1(t) + c2 y2(t)

verifies

y(t0) = y0,  y'(t0) = y0',

we need that the ci verify the system

c1 y1(t0) + c2 y2(t0) = y0
c1 y1'(t0) + c2 y2'(t0) = y0'

The determinant of this system is

W(t0) = y1(t0)y2'(t0) − y2(t0)y1'(t0).

[*** This determinant is called the Wronskian ***] We know that, as long as W ≠ 0, there exists a unique solution to this system. So, there is a unique pair c1, c2 and, consequently, a unique y(t).

More generally, for a pair of solutions to (3.2), we can define the Wronskian at time t ≥ t0:

W(t) = y1(t)y2'(t) − y2(t)y1'(t).

[*** Let me summarize Section 3.2 so far:

1. As long as f(t), g(t), h(t) are continuous, there exists a unique solution to (3.1) with initial data y(t0) = y0, y'(t0) = y0'.

2. We know how to construct (particular) solutions y1(t), y2(t) to (3.2) by solving the characteristic equation.

3. By the superposition principle, y(t) = c1 y1(t) + c2 y2(t) is a solution.

4. The constants ci can be chosen so that y(t) = c1 y1(t) + c2 y2(t) solves (3.2) with the initial data (3.3) if and only if the wronskian is not zero at t0. This is the translation of Theorem 3.2.3.

5. Furthermore, every solution to (3.2) has the form y(t) = c1 y1(t) + c2 y2(t) if and only if the wronskian is not zero at t0. This is the translation of Theorem 3.2.4. ***]

The pair y1, y2 is called a fundamental set of solutions if and only if the wronskian is not zero.
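The wronskian is also easy to evaluate numerically, which gives a quick way to test whether a candidate pair forms a fundamental set. A small Python helper of my own (the derivatives are approximated by central finite differences, so this is only a sketch):

```python
import math

# W(t) = y1(t) y2'(t) - y2(t) y1'(t), with the derivatives approximated
# by central differences of step h.
def wronskian(y1, y2, t, h=1e-6):
    d = lambda g, s: (g(s + h) - g(s - h)) / (2.0 * h)
    return y1(t) * d(y2, t) - y2(t) * d(y1, t)
```

For the made-up pair y1 = e^{2t}, y2 = e^{5t} (r1 = 2, r2 = 5), the value at t = 0 should be r2 − r1 = 3, which is nonzero, so the pair is a fundamental set.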
The solution y(t) = c1 y1(t) + c2 y2(t) is called the general solution as long as the wronskian is not zero. We can summarize a little bit more:

[*** To find the general solution for a given second order ODE we only have to find two solutions y1(t), y2(t) with a nonzero wronskian. Then y(t) = c1 y1(t) + c2 y2(t) is the general solution and the constants ci can be chosen so that (3.3) also holds. You can also understand that the non-zero wronskian condition ensures that the ansatz y(t) = e^{rt} is valid. ***]

Example 3.3. Find the general solution to

y''(t) = y(t).

Solution: We look for solutions with the form y(t) = e^{rt} and we obtain the characteristic equation r² = 1, so r = ±1. We obtain

y(t) = c1 e^t + c2 e^{−t}.

We know that this is a bona fide general solution because the wronskian is

W(t) = y1(t)y2'(t) − y2(t)y1'(t) = −1 − 1 = −2 ≠ 0.

However, notice that the functions

ỹ1 = cosh(t) = (e^t + e^{−t})/2,  ỹ2 = sinh(t) = (e^t − e^{−t})/2

are also solutions of the ODE. Consequently, we could have chosen ỹ1, ỹ2 as our fundamental set of solutions because, in this case, the wronskian is

W(t) = ỹ1(t)ỹ2'(t) − ỹ2(t)ỹ1'(t) = cosh²(t) − sinh²(t) = 1 ≠ 0.

Now observe that with our latter choice, our general solution would be

ỹ(t) = c̃1 cosh(t) + c̃2 sinh(t).

The question is [*** are we getting the same answer or a different one??? ***] As you can imagine, the answer is 'the same'. We compute

ỹ(t) = c̃1 (y1(t) + y2(t))/2 + c̃2 (y1(t) − y2(t))/2 = 0.5(c̃1 + c̃2) y1(t) + 0.5(c̃1 − c̃2) y2(t).

Consequently, it is enough if we take

c1 = 0.5(c̃1 + c̃2),  c2 = 0.5(c̃1 − c̃2),

to recover the same set of solutions.

Let's go back to our motivation with a plane in 3D space. Let's consider the plane x + y + z = 0. We know that this plane is perpendicular to the vector (1, 1, 1). Consequently, it can be described as the points p ∈ R³ such that

p = c1(1, −1, 0) + c2(1, 0, −1),

for some constants c1, c2 ∈ R.
You can convince yourself that the situation with a homogeneous second order ODE is somehow similar. Now notice that the same plane can be described as the set of points p̃ ∈ R³ such that

p̃ = c̃1 (1, −0.5, −0.5) + c̃2 (−0.5, 1, −0.5), for some constants c̃1, c̃2 ∈ R.

This is exactly the same situation as in the previous example for the second order ODE.

To become familiar, let's compute some examples involving the wronskian:

Example 3.4. Compute the wronskian corresponding to y''(t) − 5y(t) = 0.

Solution: We know from a previous example that the general solution is

y(t) = c1 e^{√5 t} + c2 e^{−√5 t}.

The particular solutions are then given by y1(t) = e^{√5 t}, y2(t) = e^{−√5 t}. The wronskian is

W(t) = y1(t) y2'(t) − y2(t) y1'(t) = e^{0·t}(−√5 − √5) = −2√5 ≠ 0.

We obtain that y1, y2 form a fundamental set of solutions.

Example 3.5. Compute the wronskian corresponding to the particular solutions y1(t) = e^{r1 t}, y2(t) = e^{r2 t}.

Solution:

W(t) = y1(t) y2'(t) − y2(t) y1'(t) = e^{(r1+r2)t}(r2 − r1).

Consequently, as long as r2 − r1 ≠ 0, y1 and y2 form a fundamental set of solutions.

A legitimate question (and an important topic) is whether this wronskian determinant may vanish in some region (being nonzero in other regions).

Theorem 3.2. Let y1 and y2 be two solutions of (3.1) with f(t) ≡ 0. Then the wronskian verifies

W(t) = W(0) e^{−∫_0^t g(s) ds}.

In particular, the wronskian is either identically zero or never zero.

Proof. We have

W(t) = y1 y2' − y2 y1',
W'(t) = y1' y2' + y1 y2'' − y2' y1' − y2 y1'' = y1 y2'' − y2 y1''.

Now we use the equation (3.2):

W'(t) = y1(−g(t)y2' − h(t)y2) − y2(−g(t)y1' − h(t)y1) = −g(t)(y1 y2' − y2 y1') = −g(t) W(t).

Consequently, the wronskian verifies a first order ODE. We can solve it:

d/dt ln(W(t)) = −g(t),
ln(W(t)/W(0)) = −∫_0^t g(s) ds,
W(t) = W(0) e^{−∫_0^t g(s) ds}.
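Abel's formula from Theorem 3.2 can be tested numerically. The sketch below uses the illustrative equation y'' + 3y' + 2y = 0 (so g(t) = 3 with fundamental solutions y1 = e^{−t}, y2 = e^{−2t}) and compares the wronskian with W(0)e^{−3t}:

```python
import math

# Fundamental solutions of y'' + 3y' + 2y = 0, i.e. g(t) = 3 in the theorem
y1 = lambda t: math.exp(-t)
y2 = lambda t: math.exp(-2 * t)
dy1 = lambda t: -math.exp(-t)
dy2 = lambda t: -2 * math.exp(-2 * t)

def W(t):
    # wronskian W(t) = y1 y2' - y2 y1'
    return y1(t) * dy2(t) - y2(t) * dy1(t)

W0 = W(0.0)   # equals -1 here
for t in [0.0, 0.5, 1.0, 2.0]:
    # Abel's formula: W(t) = W(0) * exp(-∫_0^t g(s) ds) = W(0) * exp(-3t)
    assert math.isclose(W(t), W0 * math.exp(-3 * t), rel_tol=1e-12)
print("Abel's formula verified; W never vanishes since W(0) != 0")
```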
3.2 Complex roots

We start with the following identity:

e^{iθ} = cos(θ) + i sin(θ).

This is called the Euler–de Moivre formula. To motivate this formula, we can use Taylor's Theorem. Once this formula is established, we obtain

cos(θ) = (e^{iθ} + e^{−iθ})/2,  sin(θ) = (e^{iθ} − e^{−iθ})/(2i).

Now the resemblance with the hyperbolic counterparts cosh and sinh seems clearer. In particular, we obtain that

e^{(a+ib)t}

traces a spiral. In this spiral the term e^{at} gives us the distance to zero while the term e^{ibt} is the rotation part.

Assume now that you have y(t) = e^{(a+ib)t} satisfying the ODE (3.2). Then by the superposition principle both the real and the imaginary parts satisfy the equation (3.2).

Example 3.6. Find the general solution to y'' = −y.

Solution: We know that y1 = cos(t), y2 = sin(t) form a fundamental set of solutions. Let's look for a solution of the form e^{rt}. The characteristic equation is r² = −1, so r = ±i. Using Euler's formula, we obtain

ỹ1 = e^{it} = cos(t) + i sin(t),  ỹ2 = e^{−it} = cos(t) − i sin(t).

Using the superposition principle, we obtain that y1 = (ỹ1 + ỹ2)/2 = cos(t) and y2 = (ỹ1 − ỹ2)/(2i) = sin(t) are solutions. Furthermore, both expressions are purely real functions. This will be helpful because generally we are interested in problems where only real quantities are involved.

Example 3.7. Compute the wronskian corresponding to y1(t) = Re e^{(λ+iμ)t}, y2(t) = Im e^{(λ+iμ)t}.

Solution: We have e^{(λ+iμ)t} = e^{λt}(cos(μt) + i sin(μt)), so

Re e^{(λ+iμ)t} = e^{λt} cos(μt),  Im e^{(λ+iμ)t} = e^{λt} sin(μt).

The wronskian is then

W(t) = e^{λt}cos(μt)·(λe^{λt}sin(μt) + μe^{λt}cos(μt)) − e^{λt}sin(μt)·(λe^{λt}cos(μt) − μe^{λt}sin(μt)) = μ e^{2λt}.

[*** Notice that the previous example shows that once we get a complex root, the real and imaginary parts form a fundamental set of solutions (as long as μ ≠ 0). ***]

Example 3.8. Find the general solution to

y'' + 5y = 0.
Solution: We insert the ansatz y(t) = e^{rt} to look for a solution. We find the characteristic equation

r² + 5 = 0,  r1 = √5 i,  r2 = −√5 i.

Consequently, we can take as fundamental set of solutions the pair

y1(t) = Re e^{√5 i t},  y2(t) = Im e^{√5 i t}.

We compute

y1(t) = cos(√5 t),  y2(t) = sin(√5 t).

[*** Maybe you want to solve exercises 1-22, section 3.3 ***]

3.3 Repeated roots

Some second order polynomial equations may have a repeated root. For instance, (r − 1)² = 0 has solution r1 = 1 = r2. Consequently, we have the following example.

Example 3.9. Find the general solution of y'' − 2y' + y = 0.

Solution: When we try to find a fundamental set of solutions we end up with

y1 = e^t = y2.

[*** But then the wronskian is identically zero!! ***] We have to find a different candidate for y2. The idea is to look for something like y2(t) = v(t)y1(t), where v(t) is unknown (so the problem is reduced to finding this v(t)). We insert y2(t) = v(t)y1(t) in the equation and we get

(v''e^t + 2v'e^t + ve^t) − 2(v'e^t + ve^t) + ve^t = 0.

Simplifying, we get

v'' + 2v' + v − 2(v' + v) + v = v'' = 0.

Consequently, v(t) = c̃1 t + c̃2. In this way we find our candidate

y2 = (c̃1 t + c̃2)e^t.

As c̃2 e^t can be written in terms of y1, we take c̃2 = 0 (in other words, we assume that this term is absorbed by y1). Equivalently, we can think that we don't need the full generality of the expression v(t) = c̃1 t + c̃2, but just a non-constant solution to v'' = 0, so we can take v(t) = t. Let's compute the wronskian:

W(t) = y1(t) y2'(t) − y2(t) y1'(t) = e^t(e^t + te^t) − te^t·e^t = (1 + t)e^{2t} − te^{2t} = e^{2t} ≠ 0.

3.4 Examples

In the previous sections we have seen how to solve a general second order, homogeneous ODE with constant coefficients. In this section we are going to apply the previous tools to several ODEs.

Example 3.10. Find the general solution of

y'' + 2y' + 2y = 0.
Solution: We find the characteristic equation

r² + 2r + 2 = 0.

This equation has no real solutions. One of the pair of complex solutions is

r = (−2 + √(4 − 4·2))/2 = −1 + i.

Consequently, a (complex valued) solution is

ỹ = e^{(−1+i)t} = e^{−t}(cos(t) + i sin(t)).

Recalling that the real and imaginary parts of ỹ form a fundamental set of solutions, we write the general solution as

y(t) = c1 Re e^{(−1+i)t} + c2 Im e^{(−1+i)t} = c1 e^{−t}cos(t) + c2 e^{−t}sin(t).

Example 3.11. Find the general solution of

y'' + 2y' + y = 0.

Solution: The characteristic equation is

r² + 2r + 1 = 0.

We find a repeated solution r = −1. Consequently, our fundamental set of solutions is given by

y1(t) = e^{−t},  y2(t) = t y1(t).

The general solution is then

y(t) = c1 e^{−t} + c2 t e^{−t}.

Example 3.12. Find the general solution of

y'' + 2y' − y = 0.

Solution: The characteristic equation is

r² + 2r − 1 = 0.

We find the solutions

r1 = √2 − 1,  r2 = −1 − √2.

Consequently, our general solution is

y(t) = c1 e^{(√2−1)t} + c2 e^{(−1−√2)t}.

[*** Summarizing, if we write r1, r2 for the solutions of the characteristic equation, then

1. if r1 ≠ r2 (both real), y(t) = c1 e^{r1 t} + c2 e^{r2 t},
2. if r1 = r2, y(t) = c1 e^{r1 t} + c2 t e^{r1 t},
3. if r1, r2 are complex (so r2 = r̄1), y(t) = c1 Re e^{r1 t} + c2 Im e^{r1 t}. ***]

3.5 Nonhomogeneous equations

Let's write

L[y] = y'' + p(t)y' + q(t)y.

We consider now the ODE

L[Y] = f(t),   (3.4)

and its homogeneous counterpart

L[y] = 0.   (3.5)

Let's write Y1(t) for a particular solution to (3.4) and y(t) = c1 y1(t) + c2 y2(t) for the general solution of (3.5). As the equation (3.4) is linear, we can apply the superposition principle to show that

Y2(t) = Y1(t) + y(t)

is a solution to (3.4). Let's compute

Y2'' = Y1'' + c1 y1'' + c2 y2'',  Y2' = Y1' + c1 y1' + c2 y2'.
We insert these expressions into the equation (3.4):

L[Y2] = Y1'' + c1 y1'' + c2 y2'' + p(t)(Y1' + c1 y1' + c2 y2') + q(t)(Y1 + c1 y1 + c2 y2),
L[Y2] = Y1'' + p(t)Y1' + q(t)Y1 + c1(y1'' + p(t)y1' + q(t)y1) + c2(y2'' + p(t)y2' + q(t)y2),
L[Y2] = L[Y1] + c1 L[y1] + c2 L[y2].

Now we use that Y1, y1, y2 are solutions to

L[Y1] = f(t),  L[y1] = 0,  L[y2] = 0,

to obtain L[Y2] = f(t). Consequently, [*** the general solution, Y(t), to (3.4) can be written as the sum of the general solution to (3.5), y(t), and a particular solution to (3.4), yp(t):

Y(t) = yp(t) + y(t). ***]

In this way, once we are given the problem (3.4), we can, instead of finding TWO solutions with nonzero wronskian for (3.4), just solve (3.5) and then look for ONE particular solution.

To find a particular solution, sometimes we are going to use the method of undetermined coefficients. This method relies on an appropriate hypothesis on the form of the particular solution. Of course, the hypothesis is going to be based on the form of the function on the right hand side, f(t). [*** You should read the examples in section 3.5. ***]

Example 3.13. Find a particular solution to

y'' + y' + y = e^t.

Solution: We look for something of the form

yp(t) = C e^t.

Inserting this ansatz into the equation, we find

C + C + C = 1.

So

yp(t) = e^t/3.

Example 3.14. Find a particular solution to

y'' + y' + y = e^{2t}.

Solution: Again, we look for something of the form yp(t) = C e^{2t}. Inserting this ansatz into the equation, we find

4C + 2C + C = 1.

So

yp(t) = e^{2t}/7.

Example 3.15. Find a particular solution to

y'' + y' + y = sin(t).

Solution: We look for something of the form

yp(t) = A sin(t) + B cos(t).

[*** Notice that in general it is NOT enough to consider simply sines or cosines; we need both.
***] We have

yp'' = −yp,  yp' = A cos(t) − B sin(t).

Inserting this ansatz into the equation, we find

−yp + A cos(t) − B sin(t) + yp = sin(t),

so A = 0 and B = −1. We find that the particular solution is

yp(t) = −cos(t).

Example 3.16. Find the general solution, Y(t), to

y'' + y' + y = e^t.

Solution: We already know the particular solution

yp(t) = e^t/3.

We have to find the general solution to the homogeneous problem

y'' + y' + y = 0.

We obtain the characteristic equation r² + r + 1 = 0, with solutions

r = (−1 ± √3 i)/2.

As the roots are complex, we find

y(t) = c1 e^{−0.5t} cos(√3 t/2) + c2 e^{−0.5t} sin(√3 t/2).

The general solution to the nonhomogeneous problem is

Y(t) = e^t/3 + c1 e^{−0.5t} cos(√3 t/2) + c2 e^{−0.5t} sin(√3 t/2).

Example 3.17. Find the general solution, Y(t), to

y'' + y' + y = e^{2t}.

Solution: We already know the particular solution yp(t) = e^{2t}/7, and we know

y(t) = c1 e^{−0.5t} cos(√3 t/2) + c2 e^{−0.5t} sin(√3 t/2).

The general solution to the nonhomogeneous problem is

Y(t) = e^{2t}/7 + c1 e^{−0.5t} cos(√3 t/2) + c2 e^{−0.5t} sin(√3 t/2).

Example 3.18. Find the general solution, Y(t), to

y'' + y' + y = sin(t).

Solution: We already know the particular solution yp(t) = −cos(t), and we know

y(t) = c1 e^{−0.5t} cos(√3 t/2) + c2 e^{−0.5t} sin(√3 t/2).

The general solution to the nonhomogeneous problem is

Y(t) = −cos(t) + c1 e^{−0.5t} cos(√3 t/2) + c2 e^{−0.5t} sin(√3 t/2).

As we have seen, the method of undetermined coefficients is very sensitive to the form of the function f(t) in (3.4). Now we are going to explain a second method that we can use to find a particular solution of (3.4). This method is called variation of parameters. It is a more general method that, a priori, can be applied to every given ODE. The starting point is the general solution

y(t) = c1 y1(t) + c2 y2(t)

to the homogeneous equation (3.5).
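Before moving on, the particular solutions found in Examples 3.13–3.15 can be sanity-checked by direct substitution; a minimal Python sketch (the derivatives are entered by hand):

```python
import math

def residual(yp, dyp, ddyp, f, ts=(0.0, 0.7, 1.5, -2.0)):
    # largest deviation of yp'' + yp' + yp from the forcing term f
    return max(abs(ddyp(t) + dyp(t) + yp(t) - f(t)) for t in ts)

errs = [
    # y'' + y' + y = e^t   ->  yp = e^t / 3
    residual(lambda t: math.exp(t) / 3, lambda t: math.exp(t) / 3,
             lambda t: math.exp(t) / 3, lambda t: math.exp(t)),
    # y'' + y' + y = e^{2t} ->  yp = e^{2t} / 7
    residual(lambda t: math.exp(2 * t) / 7, lambda t: 2 * math.exp(2 * t) / 7,
             lambda t: 4 * math.exp(2 * t) / 7, lambda t: math.exp(2 * t)),
    # y'' + y' + y = sin t  ->  yp = -cos t
    residual(lambda t: -math.cos(t), lambda t: math.sin(t),
             lambda t: math.cos(t), lambda t: math.sin(t)),
]
assert max(errs) < 1e-12
print("all three particular solutions verified")
```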
Then we are going to change the coefficients from constants c1, c2 to functions u1(t), u2(t) to form

yp(t) = u1(t) y1(t) + u2(t) y2(t).

Now we are going to insert this expression into the equation (3.4) to find the appropriate conditions that u1(t), u2(t) should verify in order for yp(t) to be a particular solution to (3.4). Let's show how this method works with an example.

Example 3.19 (Exercise 3.6.5). Find the general solution to

y'' + y = tan(t).

Solution: The general solution of the homogeneous problem is

y(t) = c1 sin(t) + c2 cos(t). [*** Why? ***]

We introduce yp(t) = u1(t) sin(t) + u2(t) cos(t) into the equation. We have

yp'(t) = u1(t) cos(t) − u2(t) sin(t) + u1'(t) sin(t) + u2'(t) cos(t).

We [*** force ***] the equality

u1'(t) sin(t) + u2'(t) cos(t) = 0.

There are several reasons to do this, but the main point is that, if we compute the second derivative of yp, there are no terms with ui''. Now

yp''(t) = u1'(t) cos(t) − u2'(t) sin(t) − yp(t).

Inserting both terms into the equation we get

u1'(t) cos(t) − u2'(t) sin(t) = tan(t).

Consequently, we have to solve a system of algebraic equations for the ui'. We have

u1'(t) = −u2'(t) cos(t)/sin(t),

and, using this information in the other equation,

u2'(t) = −tan(t) sin(t).

We conclude u1'(t) = sin(t). In order to find the functions u1, u2, we only have to integrate. We find

u1(t) = −cos(t),

and

u2'(t) = −tan(t) sin(t) = −(1 − cos²(t))/cos(t) = −1/cos(t) + cos(t),

so

u2(t) = −(1/2) ln((1 + sin(t))/(1 − sin(t))) + sin(t).

Consequently, a particular solution is

yp(t) = −cos(t) sin(t) + u2(t) cos(t) = −(cos(t)/2) ln((1 + sin(t))/(1 − sin(t))),

and the general solution is Y(t) = yp(t) + c1 sin(t) + c2 cos(t).

3.6 Resonance

Let's consider the spring-mass system given by

m y'' + γ y' + k y = 0,

where m is the mass, γ is the friction coefficient and k is the spring constant. All these parameters are non-negative. If γ > 0, the roots of the characteristic equation have negative real part, so

lim_{t→∞} y(t) = 0.

In the case γ = 0, the amplitude of the oscillation remains constant.
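The decay of the damped solutions can be observed numerically. The following Python sketch (with illustrative values m = 1, k = 1, γ = 0.2, so γ² < 4km) checks via finite differences that e^{−γt/(2m)} cos(μt), with μ = √(4km − γ²)/(2m), solves the equation, and that its amplitude decays:

```python
import math

m, gamma, k = 1.0, 0.2, 1.0      # illustrative underdamped values: gamma^2 < 4km
mu = math.sqrt(4 * k * m - gamma**2) / (2 * m)   # oscillation frequency

def y(t):
    # underdamped solution with c1 = 0, c2 = 1
    return math.exp(-gamma * t / (2 * m)) * math.cos(mu * t)

h = 1e-4
for t in [0.5, 1.0, 3.0, 7.0]:
    d1 = (y(t + h) - y(t - h)) / (2 * h)            # central difference for y'
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2    # central difference for y''
    residual = m * d2 + gamma * d1 + k * y(t)
    assert abs(residual) < 1e-4                     # solves m y'' + γ y' + k y ≈ 0
assert abs(y(40.0)) < abs(y(0.0))                   # amplitude decays toward 0
print("underdamped solution verified")
```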
Let's focus on the effect of the friction term γy'. We consider first the underdamped case γ²/(4km) < 1. In this case the solution of the ODE can be expressed as

y(t) = e^{−γt/(2m)} ( c1 sin( (√(4km − γ²)/(2m)) t ) + c2 cos( (√(4km − γ²)/(2m)) t ) ).

If, in addition, 0 < γ << 1, the solution tends very slowly (oscillating in the process) to the equilibrium state y = 0. In the case where γ²/(4km) = 1, the solution is

y(t) = e^{−γt/(2m)} (c1 t + c2).

This case is called critically damped. Here the solution y(t) tends to the equilibrium state without oscillation. The case γ²/(4km) > 1 is called overdamped and the behaviour is similar.

In the case where there is a forcing term,

m y'' + γ y' + k y = F(t),

the solution can be written as y(t) = yh(t) + yp(t). As we have seen, if γ > 0, yh(t) → 0 and y(t) → yp(t). The particular solution is then called the steady state solution. Let's consider F(t) = F0 cos(ωt). Then the particular solution can be written as a sum of sines and cosines and can be arranged as

yp(t) = R cos(ωt − δ),

with

ω0² = k/m,  Δ = √( m²(ω² − ω0²)² + γ²ω² ),  R(ω) = F0/Δ,  sin(δ) = γω/Δ.

The question that we want to understand is the relationship between R(ω) and ω. To compute the maximum of R(ω) as a function of ω, we take the derivative and solve R'(ω) = 0. We find that the maximum occurs at

ωmax² = ω0² (1 − γ²/(2mk)),

with value

Rmax = F0 / ( γω0 √(1 − γ²/(4mk)) ) ≈ (F0/(γω0)) (1 + γ²/(8mk)).

Now notice that if 0 < γ << 1, ωmax is real and

Rmax ≈ F0/(γω0) >> 1.

Consequently, if ω ≈ ω0 and 0 < γ << 1, we have ωmax ≈ ω0 ≈ ω and

R(ω) ≈ R(ωmax) = Rmax >> 1.

This phenomenon is called resonance.

3.7 The Laplace transform

Memento 3.1. Recall this definition:

∫_a^∞ f(x) dx = lim_{A→∞} ∫_a^A f(x) dx.

If the limit exists and is finite, we say that the improper integral converges. Otherwise the integral is said to diverge (the limit is not finite) or to fail to exist (the limit does not exist).
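As a numerical illustration of this definition (and a preview of the integrals of the next section), the truncated integrals ∫_0^A e^{−st} dt settle down to 1/s as A grows. A Python sketch using a simple trapezoid rule:

```python
import math

def trapezoid(f, a, b, n=50000):
    # composite trapezoid rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

s = 2.0
f = lambda t: math.exp(-s * t)

# truncating the improper integral at larger and larger A
vals = [trapezoid(f, 0.0, A) for A in (1.0, 5.0, 20.0)]

# exact limit: ∫_0^∞ e^{-st} dt = 1/s
assert abs(vals[-1] - 1.0 / s) < 1e-4
assert abs(vals[0] - 1.0 / s) > abs(vals[-1] - 1.0 / s)  # values approach the limit
print("improper integral converges to 1/s =", 1.0 / s)
```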
Equipped with the notion of improper integral, we define the integral transform

Y(s) = ∫_a^b K(s, t) y(t) dt,

where a, b are constants (maybe ±∞) and K is a given function called the kernel.

[*** One expects that to solve a given ODE, one needs to compute certain integrals. Or, in other words, if L(y) = y'' + by' + cy, the inverse operator L^{−1} is given by several integrals. This is the idea behind introducing the integral transform. ***]

Definition 3.2. Given y(t) we define its Laplace transform as

L(y) = Y(s) = ∫_0^∞ e^{−st} y(t) dt,

whenever the improper integral is defined.

This improper integral may not be finite, and we have the following result:

Lemma 3.1. Let f(t) be piecewise continuous and such that |f(t)| ≤ A e^{at} for t ≥ M. Then the Laplace transform L(f) exists for s > a.

The proof of this statement relies on the fact that

|f(t) e^{−st}| ≤ |f(t)| e^{−st} ≤ A e^{(a−s)t} for t ≥ M.

So

| ∫_M^∞ f(t) e^{−st} dt | ≤ ∫_M^∞ A e^{(a−s)t} dt < ∞ for s > a.

Example 3.20. Compute L(e^t).

Solution: We have

∫_0^A e^{−st} e^t dt = ∫_0^A e^{(1−s)t} dt = [e^{(1−s)t}/(1−s)]_0^A = e^{(1−s)A}/(1−s) − 1/(1−s).

Taking the limit A → ∞, we have

L(e^t) = −1/(1−s) = 1/(s−1) for s > 1.

Notice that if s ≤ 1 the integral is not finite.

Example 3.21. Compute L(sin(t)).

Solution: After two integrations by parts, we have

L(sin(t)) = lim_{A→∞} ∫_0^A e^{−st} sin(t) dt = 1 − s² L(sin(t)).

Rearranging,

L(sin(t)) = 1/(s² + 1), for s > 0.

Example 3.22. Write L(c1 y1(t) + c2 y2(t)) in terms of L(y1(t)) and L(y2(t)).

Solution: We have

L(c1 y1(t) + c2 y2(t)) = ∫_0^∞ e^{−st}(c1 y1(t) + c2 y2(t)) dt = c1 ∫_0^∞ e^{−st} y1(t) dt + c2 ∫_0^∞ e^{−st} y2(t) dt.

Consequently,

L(c1 y1(t) + c2 y2(t)) = c1 L(y1(t)) + c2 L(y2(t)).

Now we are going to show how to use the Laplace transform to compute the solution of a given second order ODE. The key point is

Lemma 3.2.

L(y'(t)) = s L(y(t)) − y(0),  L(y''(t)) = s² L(y(t)) − s y(0) − y'(0).

Proof.
We have

L(y'(t)) = lim_{A→∞} ∫_0^A y'(t) e^{−st} dt = lim_{A→∞} ( [y(t) e^{−st}]_0^A + s ∫_0^A y(t) e^{−st} dt ),

and we conclude the first part of the result. For the second one, we notice that

L(y''(t)) = s L(y'(t)) − y'(0) = s(s L(y(t)) − y(0)) − y'(0).

[*** There are analogous formulas for the case with n derivatives. ***]

[*** Thus, by taking the Laplace transform, we can pass from a differential equation at the level of y to an algebraic equation at the level of L(y). ***]

Example 3.23. Solve

y'' + 2y' + y = 0,  y(0) = y0,  y'(0) = y0'.

Solution: We have

L(y'' + 2y' + y) = L(y'') + 2L(y') + L(y) = 0.

Now

L(y'') = s² L(y) − s y0 − y0',  2L(y') = 2s L(y) − 2y0.

The LHS reduces to

(s² + 2s + 1) L(y) − 2y0 − y0' − s y0 = 0.

Consequently,

L(y) = (2y0 + y0' + s y0)/(s² + 2s + 1).

Now the problem has been reduced to finding functions with the desired Laplace transform. [*** Notice that the same method works with non-homogeneous equations. ***] Given Y(s), finding a function y(t) such that L(y) = Y(s) is called inverting the transform. [*** Please, read carefully the examples in section 6.2. ***]

Chapter 4. Systems of ODE

4.1 Review of matrices

We begin by recalling some basic stuff from 22A.

Definitions. Let's consider two matrices

A = [a11 a12 ... a1m; a21 a22 ... a2m; ... ; an1 an2 ... anm] (an n × m matrix),

B = [b11 b12 ... b1l; b21 b22 ... b2l; ... ; bk1 bk2 ... bkl] (a k × l matrix),

and a real number λ ∈ R. We also write

D = [1 2; 4 5],  E = [2 4; 3 6],  F = [9 8 7; 6 5 4; 3 2 1],  G = [1; 2; 1],  H = [1 2 3; 1 2 4; 1 1 1].

Summing two matrices. [*** Remember: to add two matrices, you need both matrices to have the same dimensions. ***] You need that because the sum of the two matrices is defined componentwise.
In other words, you add each component of A to the corresponding component of B and store the outcome in the new matrix. With the notation above and with k = n, l = m, we have

C = A + B = [a11+b11 a12+b12 ... a1m+b1m; a21+b21 a22+b22 ... a2m+b2m; ... ; an1+bn1 an2+bn2 ... anm+bnm].

For instance, considering D and E, we compute

M = D + E = [1+2 2+4; 4+3 5+6] = [3 6; 7 11].

Multiplication by a number. As before, we define this multiplication componentwise. With the notation above, we have

C = λA = [λa11 λa12 ... λa1m; λa21 λa22 ... λa2m; ... ; λan1 λan2 ... λanm].

For instance,

2F = [18 16 14; 12 10 8; 6 4 2].

Transpose of a matrix. We denote by A^t the transpose of a matrix. We have

A^t = [a11 a21 ... an1; a12 a22 ... an2; ... ; a1m a2m ... anm].

Notice that if A is an n × m matrix, its transpose A^t has dimensions m × n. For instance,

F^t = [9 6 3; 8 5 2; 7 4 1],  G^t = [1 2 1].

Matrix multiplication. [*** Notice that to multiply two matrices we need a compatibility condition. In particular, if the left matrix has dimensions n × m, the right matrix should have dimensions m × l. ***] Then we have a matrix C = [cij] = AB where

cij = Σ_l a_{il} b_{lj}.

For instance, we have

DE = [1 2; 4 5][2 4; 3 6] = [2+6 4+12; 8+15 16+30] = [8 16; 23 46],

ED = [2 4; 3 6][1 2; 4 5] = [2+16 4+20; 3+24 6+30] = [18 24; 27 36].

Thus, [*** the product of two matrices does not commute!! ***] Also,

FG = [9 8 7; 6 5 4; 3 2 1][1; 2; 1] = [9+16+7; 6+10+4; 3+4+1] = [32; 20; 8].

Determinant and inverse matrix: the 2×2 case. We will restrict to the 2 × 2 case for now. We define the determinant

det(A) = a11 a22 − a12 a21,

and the inverse

A^{−1} = (1/det(A)) [a22 −a12; −a21 a11].

Let's check that:

det(A) A A^{−1} =
[a11 a12; a21 a22][a22 −a12; −a21 a11] = [a11a22 − a12a21  −a11a12 + a12a11; a21a22 − a22a21  −a21a12 + a11a22] = [det(A) 0; 0 det(A)].

Let's see an example:

det(D) = 5 − 8 = −3,  D^{−1} = (1/(−3)) [5 −2; −4 1].

Determinant and inverse matrix: the 3×3 case. We define the determinant

det(A) = a11a22a33 + a12a23a31 + a13a21a32 − a13a22a31 − a12a21a33 − a11a23a32.

In particular, det(F) = 0.

To compute the inverse matrix, we are going to use Gaussian elimination. Let's show how to do this by computing H^{−1}. First, we write the extended matrix

H|I = [1 2 3 | 1 0 0; 1 2 4 | 0 1 0; 1 1 1 | 0 0 1].

The goal is to obtain the identity matrix on the left part by elementary row operations, ending with I|H^{−1}. We compute

[1 2 3 | 1 0 0; 0 0 1 | −1 1 0; 0 −1 −2 | −1 0 1],

[1 2 3 | 1 0 0; 0 −1 −2 | −1 0 1; 0 0 1 | −1 1 0],

[1 2 3 | 1 0 0; 0 1 2 | 1 0 −1; 0 0 1 | −1 1 0],

[1 0 −1 | −1 0 2; 0 1 2 | 1 0 −1; 0 0 1 | −1 1 0],

[1 0 0 | −2 1 2; 0 1 0 | 3 −2 −1; 0 0 1 | −1 1 0] = I|H^{−1}.

Let's check:

H^{−1} H = [−2 1 2; 3 −2 −1; −1 1 0][1 2 3; 1 2 4; 1 1 1] = [1 0 0; 0 1 0; 0 0 1].

Given a matrix A we want to solve the eigenvalue/eigenvector problem, i.e. we want to solve

A x = λ x.

In this problem, we have to find the number λ and the vector x.

Example 4.1. Given

A = [1 2; 2 2],

find the eigenvalues and eigenvectors.

Solution: To find the eigenvalues, we have to find the λ such that

det(A − λ Id) = (1 − λ)(2 − λ) − 4 = 0.

This second order equation is

2 − 3λ + λ² − 4 = λ² − 3λ − 2 = 0,

and has solutions

λ1 = (3 + √17)/2,  λ2 = (3 − √17)/2.

Now we have to solve the systems

(A − λ1 Id) x = 0,  (A − λ2 Id) x = 0.

The first system, in equation form, is

(1 − (3 + √17)/2) x1 + 2 x2 = 0,  2 x1 + (2 − (3 + √17)/2) x2 = 0.

To solve this system, notice that it reduces to

(0.25 + √17/4) x1 = x2.
Consequently, the set of solutions can be written as

{ (α, (0.25 + √17/4) α), α ∈ R }.

The second system is

(1 − (3 − √17)/2) x1 + 2 x2 = 0,  2 x1 + (2 − (3 − √17)/2) x2 = 0.

We solve this system by noticing that

x1 = −(0.25 + √17/4) x2.

Then the set of solutions can be written as

{ (−(0.25 + √17/4) α, α), α ∈ R }.

4.2 System of ODEs

In this course we have been studying how to solve a (single) ordinary differential equation of first or second order. We now want to understand systems of ODEs and also (single) ODEs of order higher than 2. To our advantage, these two problems are closely related. Notice that, for instance, the ODE

y''' + y'' + y' + y = f(t)

can be written, using the new variables u = y', v = u' = y'', as

[y'; u'; v'] = [0 1 0; 0 0 1; −1 −1 −1][y; u; v] + [0; 0; f(t)].

Of course, a general ODE of n-th order can be written as a system of first order ODEs with n unknowns in the same way. With this similarity in mind, we are going to focus our attention on systems of ODEs.

[*** Given a homogeneous system with n unknowns, we have to find n solutions ~xi, i = 1, 2, ..., n, such that their wronskian

W(t) = det [x11(t) x21(t) ... xn1(t); x12(t) x22(t) ... xn2(t); ... ; x1n(t) x2n(t) ... xnn(t)] ≠ 0.

***]

Of course, this is analogous (as it is) to the second order ODE: recall that for a given second order ODE we have to look for two solutions with a nonzero wronskian. Furthermore, we proved that if the wronskian is not zero initially, then it does not vanish later. [*** This is also true for the wronskian of a system. ***]

Let's consider the system

~x' = A ~x.

We are going to look for a solution of the form

~x = ξ e^{rt}.

Inserting this ansatz into the equation, we have

r ξ e^{rt} = A ξ e^{rt},  so  (A − r Id) ξ e^{rt} = 0.

We conclude that r is an eigenvalue and ξ is an eigenvector.
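The eigenpairs computed in Example 4.1 fit exactly into this ansatz, and they can be verified numerically; a minimal Python sketch:

```python
import math

A = [[1.0, 2.0],
     [2.0, 2.0]]

def matvec(M, v):
    # 2x2 matrix times vector
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

s = math.sqrt(17.0)
pairs = [
    ((3 + s) / 2, [1.0, 0.25 + s / 4]),       # λ1 with its eigenvector
    ((3 - s) / 2, [-(0.25 + s / 4), 1.0]),    # λ2 with its eigenvector
]

for lam, v in pairs:
    Av = matvec(A, v)
    for i in range(2):
        # A v = λ v, componentwise
        assert math.isclose(Av[i], lam * v[i], rel_tol=1e-12, abs_tol=1e-12)
print("both eigenpairs verified")
```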
Consequently, if we have n eigenvectors with n corresponding eigenvalues (that, for the moment, we assume to be different and real), the general solution is

~x(t) = c1 ξ1 e^{r1 t} + c2 ξ2 e^{r2 t} + ... + cn ξn e^{rn t}.

[*** Again, this situation is analogous to the second order ODE. For a second order ODE we look for solutions with exponential form and nonzero wronskian to find the general solution. ***]

Example 4.2. Find the general solution to ~x' = A~x, where

A = [1 2; 2 2].

Solution: Using a previous example, we know that the eigenvalues/eigenvectors are

λ1 = (3 + √17)/2 with eigenvector (1, 0.25 + √17/4),

and

λ2 = (3 − √17)/2 with eigenvector (−(0.25 + √17/4), 1).

Consequently, the general solution is

~x = c1 (1, 0.25 + √17/4) e^{(3+√17)t/2} + c2 (−(0.25 + √17/4), 1) e^{(3−√17)t/2}.

Example 4.3. Compute the wronskian corresponding to

~x1 = (1, 0.25 + √17/4) e^{(3+√17)t/2},  ~x2 = (−(0.25 + √17/4), 1) e^{(3−√17)t/2}.

Solution: We have the matrix

X = [ e^{(3+√17)t/2}  −(0.25 + √17/4) e^{(3−√17)t/2} ; (0.25 + √17/4) e^{(3+√17)t/2}  e^{(3−√17)t/2} ].

The determinant is then (using (3 + √17)/2 + (3 − √17)/2 = 3)

W(t) = det(X) = e^{3t} (1 + (0.25 + √17/4)²) ≠ 0.

Of course, there are two other situations depending on the character of the eigenvalues: complex roots and double roots. In the case of complex eigenvalues, the eigenvectors will be complex too. The linear combination of these two complex vectors is a complex valued solution

~y(t) = c1 χ1 e^{z1 t} + c2 χ2 e^{z2 t}.

[*** As we are interested in real solutions, we fix one of the terms χi e^{zi t} and take its real and imaginary parts to form our general solution:

~x(t) = c1 Re(χ1 e^{z1 t}) + c2 Im(χ1 e^{z1 t}).

***]

Example 4.4. Find the general solution to ~x' = A~x, where

A = [0 −1; 1 0].

Solution: To find the eigenvalues, we have to solve the second order polynomial equation

λ² + 1 = 0.
The solutions are λ1 = i, λ2 = −i. To find the first eigenvector, we have to solve the system

[−i −1; 1 −i][x1; x2] = 0,

finding −i x1 = x2. Consequently, the first eigenvector is (1, −i). To find the second eigenvector, we have to solve the system

[i −1; 1 i][x1; x2] = 0,

finding i x1 = x2. Consequently, the second eigenvector is (1, i).

We take the first eigenpair and consider the function

(1, −i) e^{it} = (cos(t) + i sin(t))(1, −i) = (cos(t) + i sin(t), sin(t) − i cos(t)),

so

(1, −i) e^{it} = (cos(t), sin(t)) + i (sin(t), −cos(t)).

We define the function

~x(t) = c1 Re((1, −i) e^{it}) + c2 Im((1, −i) e^{it}),

and, substituting,

~x(t) = c1 (cos(t), sin(t)) + c2 (sin(t), −cos(t)).

Example 4.5. Check that

~x(t) = c1 (cos(t), sin(t)) + c2 (sin(t), −cos(t))

is a solution to the system ~x' = A~x with

A = [0 −1; 1 0].

Solution: We have

~x' = c1 (−sin(t), cos(t)) + c2 (cos(t), sin(t)).

We compute

A~x = c1 (−sin(t), cos(t)) + c2 (cos(t), sin(t)),

so indeed ~x' = A~x.

Example 4.6. Compute the wronskian corresponding to

~x1 = (cos(t), sin(t)),  ~x2 = (sin(t), −cos(t)).

Solution:

X = [cos(t) sin(t); sin(t) −cos(t)].

The determinant is then

W(t) = det(X) = −cos²(t) − sin²(t) = −1.

This example shows that, by taking the real and imaginary parts, we can form a general solution to a given system of ODEs.

Example 4.7. Find the general solution to ~x' = A~x, where

A = [2 −1; 1 0].

Solution: The equation for the eigenvalues in this case is

p(λ) = −λ(2 − λ) + 1 = λ² − 2λ + 1 = (λ − 1)² = 0,

so we only find one eigenvalue, λ1 = 1. The eigenvector corresponding to this eigenvalue is a solution to

(2 − 1)ξ1 − ξ2 = 0,

so we can take

ξ1 = (1, 1).

We guess that the second solution has the form

~x2 = ξ1 t e^{λ1 t} + η e^{λ1 t}.

Using this latter expression in the equation, we find

~x2' = λ1 ξ1 t e^{λ1 t} + ξ1 e^{λ1 t} + λ1 η e^{λ1 t},   (4.1)

and

A~x2 = A ξ1 t e^{λ1 t} + A η e^{λ1 t}.
(4.2)

As ~x2 should be a solution of the system ~x2' = A~x2, we find that

λ1 ξ1 t e^{λ1 t} = A ξ1 t e^{λ1 t},  ξ1 e^{λ1 t} + λ1 η e^{λ1 t} = A η e^{λ1 t}.   (4.3)

[*** Notice that this pair of equations is obtained by matching the terms with equal powers of t. Equivalently, you may take t = 0 in (4.1) and (4.2) to obtain the second equation in (4.3), and then the other equation follows. ***]

We rewrite the second equation in (4.3) as

ξ1 = (A − λ1 Id) η,

so 1 = η1 − η2. Consequently, we can choose η = (2, 1). Then we have the general solution given by

~x = c1 ξ1 e^{λ1 t} + c2 (ξ1 t e^{λ1 t} + η e^{λ1 t}).

Example 4.8. Check that

~x = c1 ξ1 e^{λ1 t} + c2 (ξ1 t e^{λ1 t} + η e^{λ1 t})

is a solution to the system ~x' = A~x with

A = [2 −1; 1 0].

Solution: We have

~x' = c1 λ1 ξ1 e^{λ1 t} + c2 (λ1 ξ1 t e^{λ1 t} + ξ1 e^{λ1 t} + λ1 η e^{λ1 t}).

We compute

A~x = c1 A ξ1 e^{λ1 t} + c2 (A ξ1 t e^{λ1 t} + A η e^{λ1 t}).

Using the definition of A, ξ1 and η, we have

A~x = c1 λ1 ξ1 e^{λ1 t} + c2 (λ1 ξ1 t e^{λ1 t} + (λ1 η + ξ1) e^{λ1 t}),

which equals ~x'.

Example 4.9. Compute the wronskian corresponding to

~x1 = (1, 1) e^t,  ~x2 = (1, 1) t e^t + (2, 1) e^t.

Solution:

X = [e^t  t e^t + 2e^t; e^t  t e^t + e^t].

The determinant is then

W(t) = det(X) = e^{2t}(t + 1 − t − 2) = −e^{2t}.

[*** For a system of ODEs we need a term η e^{λ1 t} that did not appear when solving a single ODE. The reason is that, as we are now in 2 (or more) dimensions, the linear map ~v(t) = ~a t + ~b generally requires ~a and ~b to be linearly independent. Then we construct our second solution as

~x2 = ~v(t) e^{λt}

for the appropriate choice of ~a and ~b. Notice that in one dimension (so a, b are numbers), the linear map v = at + b gives us

y2 = a t e^{rt} + b e^{rt},

the first solution being

y1 = e^{rt}.

Consequently, the term with b adds no extra information, and it can be absorbed by y1. This does NOT happen in several dimensions due to the linear independence of the vectors ~a and ~b.
So, the term with ~b adds useful extra information that we can NOT forget. ***]

As we can expect, when dealing with the nonhomogeneous system

~x' = A~x + ~f(t),

we should proceed as with a single second order ODE. First we obtain the general solution to the homogeneous system

~xh' = A~xh,  ~xh = c1 ~x1 e^{λ1 t} + c2 ~x2 e^{λ2 t}.

Then we have to find a particular solution ~xp of the nonhomogeneous problem. To find this particular solution we can use the previously mentioned methods: undetermined coefficients and variation of parameters.

4.3 The matrix exponential

Let's fix a number α ∈ R. Then, by Taylor's Theorem, we have

e^α = Σ_{k=0}^∞ α^k / k!.

Notice that this definition makes sense as long as α^k makes sense. In particular, we can make sense of

e^A = Σ_{k=0}^∞ A^k / k!,

for a square matrix A. The square matrix e^A obtained in this way is called the matrix exponential. By analogy with a first order, single ODE, we have that

~x(t) = e^{At} ~x0

is a solution of

~x'(t) = A~x(t),  ~x(0) = ~x0.

Recall that for a diagonal matrix

A = [a11 0; 0 a22],

we have

e^A = [e^{a11} 0; 0 e^{a22}].
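The series definition can be tested numerically. The following Python sketch truncates the series for a diagonal matrix (illustrative entries 2 and −1) and recovers the formula above:

```python
import math

def mat_mul(A, B):
    # product of two square matrices given as nested lists
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    # truncated series  e^A ≈ Σ_{k < terms} A^k / k!
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity (k = 0 term)
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, A)          # now power = A^k
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / math.factorial(k)
    return result

A = [[2.0, 0.0], [0.0, -1.0]]              # diagonal example, as in the remark above
E = mat_exp(A)
assert math.isclose(E[0][0], math.exp(2.0), rel_tol=1e-12)
assert math.isclose(E[1][1], math.exp(-1.0), rel_tol=1e-12)
assert abs(E[0][1]) < 1e-12 and abs(E[1][0]) < 1e-12
print("e^A of a diagonal matrix is the diagonal matrix of exponentials")
```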