Error Propagation

A typical measurement actually involves measuring one or more independent variables from which the experimenter attempts to deduce a dependent variable. Denote the independent variables by x_1, x_2, ... and the dependent variable by y. There exists a specific relation between the independent variables and y:

    y = f(x_1, x_2, \ldots)    (1)

The x's represent measurements, and so they have experimental errors. That is, each x is equal to its actual (unknown) value plus an error that you should be able to estimate. The question is how to estimate the error in y. Write

    x_i = \bar{x}_i \pm \Delta x_i \;\Rightarrow\; y = \bar{y} \pm \Delta y = f(\bar{x}_1 \pm \Delta x_1,\; \bar{x}_2 \pm \Delta x_2,\; \ldots)    (2)

where the overbar denotes the true value. If there were no error, \bar{y} = f(\bar{x}_1, \bar{x}_2, \ldots). In a well-designed experiment the true value should not differ much from the measured value, and this suggests a way to get at the error: approximate the function by its Taylor series in the independent variables, viz.

    y = \bar{y} \pm \Delta y = f(\bar{x}_1, \bar{x}_2, \ldots) \pm \left(\frac{\partial f}{\partial x_1}\right)_{\Delta x_1 = 0,\, \Delta x_2 = 0,\, \ldots} \Delta x_1 \pm \left(\frac{\partial f}{\partial x_2}\right)_{\Delta x_1 = 0,\, \Delta x_2 = 0,\, \ldots} \Delta x_2 + \cdots    (3)

The true value is then given by \bar{y} = y \pm \Delta y. The measured value is given in terms of the measured x values by equation (1), so the true value in terms of the measurements is \bar{y} = f(x_1, x_2, \ldots) \pm \Delta y.

The error in y is clearly related to the errors in the x's, but we cannot simply read the error off from equation (3). In a general measurement system some of the errors will be positive and some will be negative. We don't know in advance what the signs of the errors will be in any given experiment, so we need a way to combine the errors that takes account of this. There are two methods: add the absolute values of the errors, or take the square root of the sum of the squares of the errors. The first method gives a larger result than the second, and it is used when the errors are correlated. The second method is used when the errors are independent. People usually assume the errors to be independent, even when this is not strictly true. You should discuss the independence of the errors if you use the second method.

Consider Poiseuille's measurement of viscosity as an example of error propagation. We have from the document on Poiseuille's experiments

    \mu = \frac{\pi \, \Delta p \, D^4 \, T}{64 \, V_0 \, L}    (4)

Everything on the right-hand side of this equation except \pi and 64 has to be measured, and therefore carries an error. We can plug this into equation (3):

    \mu \pm \Delta\mu = \frac{\pi \, \Delta p \, D^4 T}{64 V_0 L} \pm \frac{\pi D^4 T}{64 V_0 L}\, \Delta(\Delta p) \pm \frac{4 \pi \, \Delta p \, D^3 T}{64 V_0 L}\, \Delta D \pm \frac{\pi \, \Delta p \, D^4}{64 V_0 L}\, \Delta T \pm \frac{\pi \, \Delta p \, D^4 T}{64 V_0^2 L}\, \Delta V_0 \pm \frac{\pi \, \Delta p \, D^4 T}{64 V_0 L^2}\, \Delta L    (5)

We can rewrite this to put everything in terms of relative error, using equation (4) to write the common factor as \mu:

    \mu \pm \Delta\mu = \mu \pm \mu \, \frac{\Delta(\Delta p)}{\Delta p} \pm 4\mu \, \frac{\Delta D}{D} \pm \mu \, \frac{\Delta T}{T} \pm \mu \, \frac{\Delta V_0}{V_0} \pm \mu \, \frac{\Delta L}{L}    (6)

and the two error estimation methods give

correlated errors:

    \frac{\Delta\mu}{\mu} = \frac{\Delta(\Delta p)}{\Delta p} + 4 \, \frac{\Delta D}{D} + \frac{\Delta T}{T} + \frac{\Delta V_0}{V_0} + \frac{\Delta L}{L}    (7)

independent errors:

    \frac{\Delta\mu}{\mu} = \left\{ \left(\frac{\Delta(\Delta p)}{\Delta p}\right)^2 + 16 \left(\frac{\Delta D}{D}\right)^2 + \left(\frac{\Delta T}{T}\right)^2 + \left(\frac{\Delta V_0}{V_0}\right)^2 + \left(\frac{\Delta L}{L}\right)^2 \right\}^{1/2}    (8)

We can write the general version of equation (8) as follows:

    \frac{\Delta y}{y} = \left\{ \left(\frac{x_1}{f} \frac{\partial f}{\partial x_1}\right)^2 \left(\frac{\Delta x_1}{x_1}\right)^2 + \left(\frac{x_2}{f} \frac{\partial f}{\partial x_2}\right)^2 \left(\frac{\Delta x_2}{x_2}\right)^2 + \cdots \right\}^{1/2}    (9)

The partial derivatives are to be evaluated at the measured values of the independent variables. Equation (9) is fundamental for error propagation: it allows you to find the contribution of each measurement error to the error in the deduced variable.
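As a concrete illustration of equations (7) and (8), the short Python sketch below combines the relative errors of the Poiseuille quantities in the two ways just described. The numerical error estimates are hypothetical placeholders, not Poiseuille's actual uncertainties; the point is only to show how the D^4 dependence weights the diameter error by 4 (by 16 in quadrature) and how the two combination methods compare.

# Hypothetical relative errors for the quantities in equation (4).
# These numbers are illustrative only; substitute your own estimates.
rel_err = {
    "dp": 0.010,   # pressure drop, Delta(Delta p) / Delta p
    "D":  0.005,   # tube diameter, Delta D / D
    "T":  0.002,   # collection time, Delta T / T
    "V0": 0.008,   # collected volume, Delta V0 / V0
    "L":  0.003,   # tube length, Delta L / L
}

# Sensitivity exponents: mu ~ dp^1 * D^4 * T^1 * V0^-1 * L^-1,
# so each relative error is weighted by |exponent| (equations 6-8).
exponent = {"dp": 1, "D": 4, "T": 1, "V0": -1, "L": -1}

# Equation (7): correlated (worst-case) combination -- add magnitudes.
corr = sum(abs(exponent[k]) * rel_err[k] for k in rel_err)

# Equation (8): independent combination -- add in quadrature.
indep = sum((exponent[k] * rel_err[k]) ** 2 for k in rel_err) ** 0.5

print(f"correlated  (eq. 7): d(mu)/mu = {corr:.4f}")
print(f"independent (eq. 8): d(mu)/mu = {indep:.4f}")

# Share of each term in the quadrature sum, showing that the D^4
# dependence makes the diameter the dominant source of error.
for k in rel_err:
    share = (exponent[k] * rel_err[k]) ** 2 / indep ** 2
    print(f"  {k:>2}: {100 * share:.1f}% of the variance")

With these placeholder numbers the correlated estimate is nearly twice the independent one, and the diameter alone contributes roughly two thirds of the variance, which motivates the remark in the next paragraph about measuring the diameter with high precision.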
In the Poiseuille example the viscosity is most sensitive to the diameter of the tube, which enters as D^4, so care should be taken to measure the diameter with high precision. One can argue for the independence of Poiseuille's measurement errors by considering how the measurements must have been made. I leave this to the reader as an exercise.
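Equation (9) can also be applied without working out the partial derivatives by hand. The sketch below is a generic propagator written under the independent-error assumption of equation (8): it estimates each partial derivative by a central finite difference at the measured values and adds the terms in quadrature. The function name propagate, the step-size choice, and the sample numbers are assumptions for illustration, not part of the original write-up.

from typing import Callable, Sequence
import math

def propagate(f: Callable[..., float],
              x: Sequence[float],
              dx: Sequence[float],
              h: float = 1e-6) -> float:
    """Relative error Delta y / y from equation (9), assuming independent
    errors dx_i, with central-difference partial derivatives evaluated
    at the measured values x_i."""
    y = f(*x)
    total = 0.0
    for i, (xi, dxi) in enumerate(zip(x, dx)):
        step = h * abs(xi) if xi != 0 else h
        xp = list(x); xp[i] = xi + step
        xm = list(x); xm[i] = xi - step
        dfdx = (f(*xp) - f(*xm)) / (2.0 * step)   # partial df/dx_i
        total += (dfdx * dxi / y) ** 2            # ((1/f)(df/dx_i) dx_i)^2
    return total ** 0.5

# Poiseuille's formula, equation (4).
def mu(dp, D, T, V0, L):
    return math.pi * dp * D**4 * T / (64.0 * V0 * L)

# Hypothetical measured values and absolute errors (placeholders only).
x  = [1.0e4, 1.0e-3, 100.0, 1.0e-6, 0.5]       # dp, D, T, V0, L
dx = [1.0e2, 5.0e-6,   0.2, 8.0e-9, 1.5e-3]    # same relative errors as above

print(f"d(mu)/mu = {propagate(mu, x, dx):.4f}")  # reproduces equation (8)

Whether the quadrature combination is justified still rests on arguing that the individual errors are independent, which is the exercise left above.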