Examples from Notes for Chapter 2 of Heinz Math 191T: Mathematical Modeling, Spring 2017
Dr. Doreen De Leon

Examples for Notes for Section 2.3 of Text

Example 3.1

[The Maple input and output for this example were lost in the export; only the output labels (1.1.1)-(1.1.16) and scattered computed values (4600.66666666667, 53.6666666666667, 78.1111111111111, 408.703703703704, 3175.22222222222, 295.111111111111, 6707.66666666667, 606.320987654321, 0.966194346491293, 0.966194346491291) remain.]

Example 3.2

Look at an example of height vs. age. The data is copied from An Introduction to the Mathematics of Biology, with Computer Algebra Models by Yeargers, Shonkwiler, and Herod.

We can find the linear least squares fit using the formulas derived in class. From the worksheet output, the relevant sums are

    sum(y_i) = 823,  sum(x_i) = 49,  sum(x_i^2) = 455,  sum(x_i*y_i) = 6485,

which give the slope m = 6.464285714 and the y-intercept b = 72.32142857.

Alternately, we can let Maple find the least squares fit for us. Set up a sequence of points for plotting purposes, and include the plots and stats libraries. Set up the graph of the points, and then determine the least squares fit. This is done using the fit function, with the leastsquare category, which we further restrict to a line. The arguments of the fit function are the age and ht sequences.

We can recapture the values of the slope, m, and the y-intercept, b, as follows:

    m = 6.46428571428572,  b = 72.3214285714286.

Finally, store the plot of the line thus determined in another variable, Fit, and display the two plots together. It appears that our model is a close fit. Let's check the relative errors.

Example 3.3

Look at an example of ideal weight vs. height for medium-built males. The data is copied from An Introduction to the Mathematics of Biology, with Computer Algebra Models by Yeargers, Shonkwiler, and Herod.

The bulk of the data appears linear, so let's try a linear least squares fit.
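The hand computation in Example 3.2 can be sketched outside Maple as well. The sums below are the ones printed in the worksheet; the number of data points, n = 7, is an assumption inferred from the fact that it reproduces the printed slope and intercept exactly.

```python
# Linear least-squares slope and intercept from summary statistics,
# mirroring the formula-based computation in Example 3.2.
# The four sums are taken from the worksheet output; n = 7 is an
# assumption (it is the value consistent with the printed results).
n = 7
Sx, Sy = 49, 823        # sum of ages, sum of heights
Sxx, Sxy = 455, 6485    # sum of age^2, sum of age*height

m = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)
b = (Sy - m * Sx) / n

print(m, b)  # 6.464285714..., 72.321428571...
```

These agree with the slope and intercept Maple's fit function returns, as expected, since both minimize the same sum of squared residuals.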
[Maple output (1.3.3)-(1.3.5) lost in export; one computed value, 4.04395604395605, remains.]

The relative error appears to be increasing towards the end of the height range. This tells us that the model needs to be refined. Since in many geometric solids the volume changes with the cube of the height, we will try to find a cubic fit for the data.

[Maple output (1.3.6)-(1.3.10) lost in export.]

This appears much better. Let's look at the relative errors.

The residual is much better now, and appears to be decreasing as the height increases. Can we do better? Suppose we try to find a fit of the form w = a*h^n.

[Maple output (1.3.13)-(1.3.17) lost in export; the computed values 0.168195232138925, 4.68529107456381, and 108.341803262159 remain.]

Problem: Technically, we would prefer that n be an integer (or at least a rational number). But let's use the computed value of n, since the mathematics behind this approach does not require an integer (we just took the natural log of both sides to get a linear equation).

As mentioned above, ideally n is an integer, or at least a rational number. So let's redo the above using the rational number closest to n. [The value was lost in the export.]

Is this result any better, in terms of the relative error? Let's check.

The results are virtually identical (not surprisingly). The relative error is increasing for some of the values. Why do you think this is? What do you think would happen if we looked at the relative error of the results using the logarithm of the values?

Example 3.4

The data below is copied from An Introduction to the Mathematics of Biology, with Computer Algebra Models by Yeargers, Shonkwiler, and Herod. First, set up the data sequences and include the stats library.

BMI is defined as the weight (in kg) divided by the square of the height (in m). The data for the height and weight below is given in inches and pounds, respectively. Therefore, in order to determine the BMI, we need to convert the height and weight to their metric equivalents and do the division.
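The log-transform trick used for the w = a*h^n fit above can be sketched as follows. The textbook's height/weight data is not reproduced in this export, so the numbers below are illustrative data generated from a known power law (a = 0.5, n = 3), chosen only to show that the transform recovers the parameters.

```python
import numpy as np

# Sketch of the log-transform power fit w = a * h^n (Example 3.3).
# Hypothetical data built from an exact power law with a=0.5, n=3;
# the book's actual data is not available here.
h = np.array([60.0, 63.0, 66.0, 69.0, 72.0])   # hypothetical heights
w = 0.5 * h ** 3                                # exact power-law "weights"

# Taking the natural log of both sides gives a linear relation:
#   ln w = ln a + n * ln h,
# so an ordinary degree-1 least-squares fit in (ln h, ln w) yields n
# as the slope and ln a as the intercept.
n_fit, ln_a = np.polyfit(np.log(h), np.log(w), 1)
a_fit = np.exp(ln_a)

print(n_fit, a_fit)  # recovers n ≈ 3 and a ≈ 0.5
```

Note that this minimizes squared error in the logarithms, not in the original weights, which is one reason the relative errors behave differently after transforming back.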
[Maple output (1.4.1)-(1.4.6) lost in export.]

We will now compute the least squares fit. Notice that here we don't specify a linear least squares fit. This is because Maple computes a linear least squares fit by default, with the last variable listed (here, c) being the dependent variable.

Is this a good fit? Let's check it using sample data not used in the calculations. The subject is 64.5 inches tall, weighs 135 pounds, and has a skin-fold measurement of 159.9 millimeters. Her true body fat percentage is 30.8. The predicted value is 32.31524865.

Examples for Notes for Section 2.4 of Text

[Maple output (2.1)-(2.5) lost in export.]

Example 4.1 of Notes

First, we will do a quadratic fit for the data in Example 4.1 of the notes.

[Maple output (2.1.1)-(2.1.6) lost in export; the computed values 5.50000000000000 and 0.106000000000000 remain.]

Try a power fit of the form y = a*x^n to see if we obtain better results.

[Maple output (2.1.7)-(2.1.11) lost in export; the computed values 4.71887275817813 and 2.26231660989839 remain.]

This is clearly much worse than the quadratic polynomial fit at the first point, but it is marginally better at all the other points.

Example 5.1 of Notes for Section 2.5 of Text

In this example, we will attack the same problem, modeling a set of population data, in two different ways. First, we find an exponential fit directly using Maple's Fit function. Second, we transform the data, find a linear least-squares fit, and then transform back. Which approach will work better? Or will both give the same results?

Direct approach: the fitted parameters are 24.9192325677618 and 0.422738417966150.

After transforming the data: the fitted parameters are 24.9616633683972 and 0.422375079642857.
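The two approaches of Example 5.1 can be sketched side by side. The notes' population data is not reproduced in this export, so the data below is illustrative: it is generated from known parameters (a = 25, b = 0.42, echoing the magnitudes of the worksheet's results) plus small multiplicative noise, and the variable names, random seed, and grid search for the direct fit are all assumptions of this sketch, not the notes' Maple code.

```python
import numpy as np

# Example 5.1 sketch: fit y = a * exp(b*t) two ways.
# Illustrative data from known parameters a=25, b=0.42 with small
# multiplicative noise (the notes' actual data is unavailable here).
rng = np.random.default_rng(0)
t = np.arange(0, 6)
y = 25.0 * np.exp(0.42 * t) * (1 + 0.01 * rng.standard_normal(t.size))

# Approach 1: transform.  ln y = ln a + b*t, so fit a line to (t, ln y)
# and exponentiate the intercept.
b1, ln_a = np.polyfit(t, np.log(y), 1)
a1 = np.exp(ln_a)

# Approach 2: direct least squares on the original data.  For a fixed b
# the optimal a has a closed form, so grid-search b and keep the pair
# with the smallest sum of squared errors.
bs = np.linspace(0.3, 0.5, 2001)
E = np.exp(np.outer(bs, t))                       # row i holds exp(bs[i]*t)
a_opt = (E * y).sum(axis=1) / (E * E).sum(axis=1)  # best a for each b
sse = ((a_opt[:, None] * E - y) ** 2).sum(axis=1)
k = np.argmin(sse)
a2, b2 = a_opt[k], bs[k]

print(a1, b1)  # transform-based estimates
print(a2, b2)  # direct estimates: close, but not identical
```

As in the worksheet, the two sets of estimates are close but not equal: the transformed fit minimizes squared error in ln y, while the direct fit minimizes squared error in y itself.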