Nonlinear curve fitting and the Charpy impact test: statistical, mathematical, and physical considerations
Kristin Yeager
Senior Honors Research Project in Applied Mathematics
PAGE 1 OF 32

1. THE CHARPY TEST
The safe operation of steel pressure vessels relies on the material's fracture toughness, or ability to withstand a sudden, forceful impact. In recent years, tests of fracture toughness have become increasingly advanced, but also prohibitively expensive. Instead, the most common way of assessing fracture toughness is a test that has been in use for over 100 years – the Charpy impact test.

1.1. Testing standards
In the United States, the American Society for Testing and Materials (ASTM) is the governing body that sets the requirements for the Charpy impact test. The document ASTM E23, "Standard Test Methods for Notched Bar Impact Testing of Metallic Materials", establishes the criteria for all aspects of the test, including:
• Charpy machine specifications
• Charpy machine calibration
• Shape, size, and geometry of Charpy test specimens
• Data that must be collected during the test
• Data that must be reported with the test results
The calibration requirements for Charpy machines are set by the National Institute of Standards and Technology (NIST). NIST owns and maintains the three "master Charpy machines", whose output sets the standard for all other Charpy machines in the United States [1]. Every Charpy machine in the United States must undergo indirect verification testing biannually as part of its calibration. NIST sends out Charpy samples with a known average impact energy. The laboratory performs Charpy tests on the specimens; if the average impact energy determined by the machine is not within an acceptable tolerance of the known average, the lab must re-calibrate the machine and re-take the verification test. This ensures that all machines produce uniform, comparable results.
The American Petroleum Institute (API) and the American Society of Mechanical Engineers (ASME) jointly publish the document API 579-1/ASME FFS-1, Fitness for Service.
This is a recommended, supplementary document that covers the assessment of steel pressure vessels. Section F.4 in particular addresses the analysis of Charpy impact test data.

1.2. Test procedure
The basic impact test apparatus is a nearly frictionless pendulum hammer. The specimen – a notched, rectangular bar of a steel alloy – is mounted in a vise on the apparatus. The pendulum hammer is raised to a known height h and released, striking and breaking the specimen. The hammer swings through the specimen until it reaches the maximum height of its arc h′, which is recorded. Since fracture toughness varies with temperature, this process is repeated at several temperatures. Typically, three of these measurements are collected at each temperature.

1.2.1. Test results
ASTM E23 requires that all experiments include the following information along with the Charpy test results:
• The type of alloy,
• The chemical composition of the alloy,
• The alloy's heat treatments (if applicable), and
• The dimensions of the specimen.

1.2.1.1. Absorbed impact energy
The initial height of the hammer h and its arc height h′ are recorded during the test. The potential energy of the hammer at h is mgh. When the hammer strikes the specimen, part of its total energy is lost upon impact. Assuming no energy loss due to friction, the difference in the hammer's potential energy at h and h′ is the amount of energy that was transferred to the specimen in the impact. This quantity is called the absorbed energy or impact energy of the specimen:

(1.1)  Y = mg(h − h′)

At sufficiently low and sufficiently high temperatures, the amount of energy the alloy can absorb reaches a plateau. The regions of plateau behavior are referred to as the lower shelf and the upper shelf, respectively; they are also sometimes collectively referred to as side constraints. Between them lies the transition region, wherein the impact energy of the specimens increases dramatically with respect to temperature. This behavior forms an S-shaped curve, which experimenters can use to interpolate the amount of energy the alloy will be able to absorb at a given temperature.

1.2.1.2.
Fracture features
Visual inspection of the broken specimen's fracture features produces a qualitative measure of how the specimen fractured at a particular temperature. A brittle metal will have a crystalline, smooth fracture; a ductile metal will have a jagged, uneven fracture surface. Specimens at the extreme ends of the spectrum display purely brittle or purely ductile fracture features, but in between, the specimens display a mixture of the two. The percent ductile fracture is a qualitative measure of the proportion of the total fracture features that are ductile. This information can help an evaluator understand outlying measurements and identify the range of temperatures where a steel pressure vessel runs the risk of failure due to brittle fracture. Within the transition region, there are three particular "benchmark" temperatures corresponding to certain fracture types.
• Reference transition temperature (RTT) – the temperature where a broken specimen exhibits 90% ductile and 10% brittle fracture features; corresponds to the right endpoint of the transition region;
• Nil ductility temperature (NDT) – the temperature where a broken specimen exhibits 10% ductile and 90% brittle fracture features; corresponds to the left endpoint of the transition region;
• Ductile-brittle transition temperature (DBTT) or fracture appearance transition temperature (FATT) – the temperature where a broken specimen exhibits 50% ductile and 50% brittle fracture features; corresponds to the inflection point of the curve.
Advances in computer imaging now produce highly accurate measures of fracture features. However, as recently as 1981, ASTM E23 discouraged the use of fracture features in analysis. This is because ductility was assessed by visually comparing the specimen to a standard set of example images showing what a specimen should look like at 10% ductility, 50% ductility, and so on. This method was deemed too subjective to be consistent, so its use was discouraged. Even today, these measurements are optional and not widely reported.

1.2.2. Specimen requirements
The dimensions of the specimen and the geometry of its notch directly affect how much energy the specimen absorbs and how it fractures. ASTM E23 requires that a specimen be 55mm long with a 10mm x 10mm cross section. A 45° notch with a radius of .25mm must be machined 2mm into the specimen on the side opposite the impact point. Specimens that do not fit these criteria behave differently, and while the physical effects of the deviations are known, only specimens sharing the same dimensions may be compared. This is why such stringent standards exist for uniform specimens and machine validation; it is necessary to be able to compare data generated by different operators, machines, and labs.

1.2.2.1. Cross-sectional area
A specimen with a cross-sectional area smaller than 10mm x 10mm absorbs less energy, and it is also less likely to have a cleavage fracture than a full-sized specimen at the same temperature [4]. This implies that the overall curve is shifted to the left. It is possible, but uncommon, to use sub-sized specimens for a test. Any data obtained using sub-sized specimens cannot be directly compared to data from standard-sized specimens. If sub-sized specimens are used, corrective measures must be applied to account for the shift in the curve. These measures may be found in ASTM E23 Annex A3 and API 579 Section F.4.3.2. This procedure is generally limited to situations where very little material is available to create samples, e.g., samples taken from a nuclear reactor pressure vessel.

1.2.2.2. Notch
The depth, angle, and radius of the notch affect how ductile or brittle the specimen is. Before notched specimens were introduced, the solid rectangular bars used in impact tests were effective only for brittle metals. Solid ductile specimens had difficulty breaking, and would bend instead. The introduction of the notched specimen corrected this, but also revealed that at temperatures where solid bar specimens had displayed ductile fracture, the notched specimens would instead exhibit brittle fracture [2].

1.2.2.3. Temperature
Specimens must be kept at a tightly controlled temperature, and must be broken within five seconds of being removed from the controlled environment. After five seconds, the temperature of the specimen is no longer within an acceptable tolerance; the amount of impact energy that can be absorbed will decrease, and the specimen will not be comparable to measurements made within the valid range.

Table 1. Units of measurement for quantities relevant to impact testing.

  Quantity             Metric                            Imperial
  Temperature          Celsius (°C)                      Fahrenheit (°F)
  Impact Energy        Joules (J)                        Foot-pounds (ft-lbs)
  Fracture Features    unitless                          unitless
  Fracture Toughness   Megapascal root meter (MPa·√m)    Kilopound force root inch (ksi·√in)

1.3. Correlation with fracture toughness
Fracture toughness and absorbed impact energy are not equivalent quantities. It is important to understand that the results of the Charpy test are not the fracture toughness of the alloy. Rather, to determine fracture toughness, the results of the Charpy test must be correlated with the fracture toughness. API 579 lists several formulas to compute this correlation.
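Because Charpy results circulate in both unit systems of Table 1, conversions between them come up constantly. The following is a minimal sketch (the conversion factors are standard; the function names are my own):

```python
def ft_lbs_to_joules(energy_ft_lbs):
    """Convert impact energy from foot-pounds to Joules (1 ft-lb = 1.35582 J)."""
    return energy_ft_lbs * 1.3558179483314004

def celsius_to_fahrenheit(temp_c):
    """Convert temperature from degrees Celsius to degrees Fahrenheit."""
    return temp_c * 9.0 / 5.0 + 32.0

# the 20 ft-lb index energy used by the API 579 correlations is about 27 J
print(round(ft_lbs_to_joules(20.0), 1))  # → 27.1
```

This is why the index energy is quoted interchangeably as 20 ft-lbs or 27 Joules in the correlation formulas.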
The correlation formulas require the temperature corresponding to a measurement of 20 ft-lbs / 27 Joules [4]. Since it is unlikely that the experimenter will pick the exact temperature corresponding to this energy when performing the test, a curve must be fit to the data and used to interpolate the temperature at a given energy value.

1.4. Model selection
In order to make any sort of correlation between the test results and fracture toughness, a curve must first be fit to the test data – but there is no clear answer as to which curve should be used. Although absorbed impact energy with respect to temperature predictably follows an S-shaped curve, there is no general consensus about a "best" model. Several well-known functions, such as the hyperbolic tangent function, seem to adequately describe the relationship of impact energy to temperature, but each of these models has its own limitations. For example, the hyperbolic tangent model assumes that the curve is symmetrical about the inflection point, but empirically, this is not always the case with Charpy data [7]. Some argue that an idealized curve should not be used at all [6], since fracture behavior is so specific to the material itself.
The curves in Table 2 have been suggested as models for Charpy impact test data. Some of them simply imitate the S-shaped behavior, while others incorporate the effects of chemical composition or radiation exposure in the model. Note that this is by no means a definitive list, since the Charpy test is constantly being refined as new techniques are developed.

Table 2. Models used to curve-fit Charpy data.
• Hyperbolic tangent
• Sigmoid curves
• Three-parameter Weibull curve
• Three-parameter Burr curve
• Cubic splines and other interpolated polynomials

The comparison of models is beyond the scope of this paper, as it requires a large number of test cases with well-defined behavior to make any sort of generalization about when to use a particular model. For simplicity's sake, this paper uses the hyperbolic tangent model exclusively. The rationale for this is:
• It is a well-known model that the reader is likely to encounter in both literature and in practice,
• It has been applied and tested extensively, so its strengths and weaknesses are known, and
• The graphical interpretation of its parameters is simple.

1.5. Applications
In industry, the Charpy test is used for quality control, comparison of materials, and monitoring steel pressure vessels' susceptibility to cracking. For this reason, the area of greatest interest is usually the transition region; engineers want to know at what temperature they must maintain the vessel when doing repairs. Additionally, when selecting an alloy for a building project, there is only so much benefit that can be obtained from "purifying" the steel before the costs outweigh the benefits.
The nuclear industry must take additional precautions against the change in a pressure vessel's fracture toughness due to irradiation. Neutron damage lowers the fracture toughness of the metal. Graphically, this corresponds to a right shift in the energy-temperature curve, an increase in the width of the transition region, and a decrease in the upper shelf energy [5]. The change in toughness is monitored using Charpy tests every few months by observing the change in the temperature corresponding to some index energy. Theoretically, a decrease in toughness would increase the temperature corresponding to the index over time.
The fracture features of the specimens are especially useful in this situation, since they provide further information on the severity of toughness loss.

2. MATHEMATICAL AND STATISTICAL TOOLS
After the Charpy data is collected, a curve is fit through the data using either nonlinear regression or a cubic spline. Outlier analysis is performed on the data using heuristics about the material, descriptive statistics, and/or hypothesis testing; if applicable, the outliers are removed and the curve fit is redone.
A statistician with little to no background on the Charpy test is at a disadvantage when analyzing Charpy data; this is also true of engineers with limited knowledge of statistics. Current statistical software is able to compute dozens of high-level processes in seconds, making it easy to haphazardly run a regression or perform dozens of hypothesis tests without understanding their meaning. This section serves as a brief overview of the mathematical and statistical tools that are used for curve fitting. It also introduces some restrictions that are unique to the statistical analysis of Charpy data.

2.1. Nonlinear minimization
The function f(β) to be minimized is referred to as the objective function. The general minimization problem is:

(2.1)  min_β f(β)   subject to   h_i(β) = 0,   g_i(β) ≥ 0
This is called constrained minimization [8]. The equality constraints h_i(β) and inequality constraints g_i(β) place restrictions on the values that some or all of the parameters may take. When no constraints are placed on the parameters, the problem is called unconstrained minimization.
When f(β) is linear, a unique solution exists to the minimization problem. However, when the parameters are not linear, there is no closed form solution – the solution space of (2.1) can have both local and global minima. Depending on where the algorithm begins searching for a solution, it may converge to a local minimum rather than the global minimum. Because of this, it is highly important to select starting values that are as close to the suspected minimum as possible.
Many different nonlinear optimization algorithms exist, but all can be classified as genetic algorithms, quasi-Newton methods, conjugate gradient methods, or simulated annealing. Simulated annealing and genetic algorithms are used to optimize very large systems with many local minima. All of the models in Table 2 are simple enough to use conjugate gradient or quasi-Newton methods. In statistical computing packages, the most commonly used nonlinear optimization algorithms are Gauss-Newton, Levenberg-Marquardt, conjugate gradient, and Broyden-Fletcher-Goldfarb-Shanno (BFGS).
The primary source of difficulty when computing the minimum is the calculation of the Jacobian:

(2.2)  J = [ ∂y_i / ∂x_j ]

and the Hessian:

(2.3)  H = [ ∂²y_i / (∂x_i ∂x_j) ]

This is because many objective functions are difficult or impossible to differentiate. While it is possible to numerically approximate the derivatives, it is computationally expensive for large systems. In general, explicit derivatives give a small boost in accuracy over numerical approximations. The complexity of the objective function should dictate which algorithm to use; the algorithm should be neither too simple nor too advanced for the chosen model.

2.2. Regression
The idea behind regression is that every measurable quantity can be broken down into a deterministic (exact) and stochastic (error) component, and that all natural quantities tend towards their average. This can be written mathematically as

(2.4)  Y_i = f(x_i, β_j) + ε_i
where
• x_i is a predictor variable,
• Y_i is the response variable,
• f(x_i, β_j) is the mean, or expected value, of Y_i at x_i, denoted E{Y_i}, and
• ε_i is the residual, or "error" term, which corresponds to the difference between the observed measurement and the expected measurement.
The actual population parameters β_j in the function f(x_i, β_j) cannot be directly measured; they can only be estimated from the sample data. When investigators examine a cross-section of the total population, they must make the assumption that their sample is representative of the population as a whole. They must determine the unbiased sample estimates b_j for the population parameters β_j such that E{b_j} = β_j. The most common way to find these estimates is ordinary least squares regression (OLS). OLS seeks the values of b_j that minimize the least squares criterion

(2.5)  Q = Σ_{i=1}^{n} [ Y_i − f(x_i, b_j) ]²

OLS makes four important assumptions. Violations of these assumptions have consequences that can affect the validity or interpretation of the regression results. Some violations have more severe consequences than others, and certain tests are more robust to particular violations than others. These assumptions are:
• The expected value of the residuals E{ε_i} is zero, and the variance of the residuals σ²{ε_i} is constant. (The variance is the expected squared distance of the observed value from the expected value.)
• The variation in Y is not due to any particular x-value x*. This condition implies that the variance of the residuals must be constant across all values of x. If the residuals satisfy this condition, they are said to be homoscedastic. When the variance is not constant, it is said to be heteroscedastic. The term homogeneity of variance is also used to describe this requirement.
• The error terms are independent, random variables. There is no pattern to the error terms, and the error terms are not correlated.
• The error terms are normally distributed.

Table 3. Assumptions of OLS regression.
1. E{ε_i} = 0 and σ²{ε_i} = σ²
2. σ²{Y | x*} = σ²{Y}
3. ε_i is an independent, random variable.
4. ε_i is normally distributed.

The homogeneity of variance is arguably the most important criterion to satisfy. Violations of this assumption imply that the parameter estimates are biased (E{b_j} ≠ β_j). If the residuals are found to be heteroscedastic, the evaluator must find some correction that will stabilize the variance. One possible way to do this is to use weighted least squares regression. Weighted least squares seeks to minimize the criterion

(2.6)  Q_w = Σ_{i=1}^{n} (1 / s_i²) [ Y_i − f(x_i, b_j) ]²

where s_i² is the sample variance. The weighting factor w_i = 1/s_i² assigns more importance to
points with smaller deviations, reducing the effect that outliers have on the curve fit.

2.3. Setting up the regression using the hyperbolic tangent model
API 579 designates the equation

(2.7)  Y = A + B tanh( (T − D) / C )

to curve fit Charpy data. Note that the variable T represents temperature, and Y represents the impact energy. The model has four parameters:
• A, the vertical position of the transition region midpoint,
• B, the vertical distance between point A and the upper and lower shelves,
• C, one half the width of the transition region, and
• D, the horizontal position of the transition region midpoint.

Figure 1. Graphical interpretation of parameters in (2.7).

The graphical interpretation of these parameters makes it simple to estimate starting values from the data:

(2.8)  A = ( Σ_i Y_i ) / n
       B = ( max(Y_i) − min(Y_i) ) / 2
       C = ( max(T_i) − min(T_i) ) / 2
       D = median(T_i)
An alternative model, used by MPM Technologies in their CharpyFit™ software [9], is

(2.9)  Y = (a_1 / 2) [ 1 − tanh( (T − a_4) / a_3 ) ] + (a_2 / 2) [ 1 + tanh( (T − a_4) / a_3 ) ]

The four model parameters are:
• a_1, the lower shelf energy value,
• a_2, the upper shelf energy value,
• a_3, the width of the transition region, and
• a_4, the temperature coinciding with the inflection point of the hyperbolic tangent.

Figure 2. Graphical interpretation of parameters in (2.9).

The median of T can be used as a starting value estimate for a_4, but the rest of the parameters do not have simple catch-all rules to estimate them. Both min(Y) and max(Y) are poor estimates for a_1 and a_2, respectively, because of the amount of scatter associated with Charpy data. The transition region width parameter a_3 is also difficult to estimate via scatterplot, since the actual "transition" may happen at a temperature that falls between where the samples are collected.
For very brittle alloys (those with impact energies between 1 and 10 Joules), the unconstrained minimization of the least squares function may extrapolate negative impact energies. In that case, constrained optimization algorithms must be used. Because the parameters are easy to visualize, the constraint functions necessary for the data set at hand can be estimated from a scatterplot. Typically, constraints are placed on the shelf values; it is less common to see constraints placed on the width of the transition region, and even rarer to see constraints on the position of the inflection point of the curve. It is important to use as few constraints as possible, since it is possible for them to inadvertently interact.

2.4. Identification of outliers
In practical situations, what is considered "clean" Charpy data can still display a large amount of scatter. Because of this, it is crucial that multiple tests be performed at each temperature, and under the same circumstances. The Charpy test has dozens of potential causes for deviation [6], and has the additional complication of being comparative.
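The shelf constraints discussed in section 2.3 translate directly into inequality constraints of the form g_i(β) ≥ 0 from (2.1). The following is a hedged sketch using SciPy's COBYLA on synthetic data for a brittle alloy; all numbers and names are illustrative, not taken from this paper's case studies:

```python
import numpy as np
from scipy.optimize import minimize

def tanh_model(T, A, B, C, D):
    """Hyperbolic tangent model (2.7): Y = A + B*tanh((T - D)/C)."""
    return A + B * np.tanh((T - D) / C)

# synthetic data for a brittle alloy: lower shelf near zero Joules
T = np.array([-40.0, -20.0, 0.0, 20.0, 40.0, 60.0, 80.0])
Y = np.array([2.1, 2.5, 4.0, 15.0, 30.0, 38.0, 40.0])

def sse(p):
    """Least squares criterion (2.5) for the tanh model."""
    return np.sum((Y - tanh_model(T, *p)) ** 2)

# g(p) >= 0: keep the extrapolated lower shelf (A - B) nonnegative
cons = [{"type": "ineq", "fun": lambda p: p[0] - p[1]}]

# starting values per the graphical rules in (2.8)
x0 = [Y.mean(), (Y.max() - Y.min()) / 2.0, (T.max() - T.min()) / 2.0, np.median(T)]
res = minimize(sse, x0, method="COBYLA", constraints=cons)
A, B, C, D = res.x  # A - B is the constrained lower-shelf estimate
```

COBYLA only supports inequality constraints, which is sufficient here; as the text notes, constraining only the shelf value keeps the risk of interacting constraints low.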
Repeated measurements at each temperature help the evaluator determine if the deviation was random, due to machine or operator error, or due to variation in the material. In general, three repetitions at each temperature is the minimum. Trials with fewer than three readings per temperature seriously limit the reliability of the fitted curve. Empirical comparisons of curve fits that limit the number of trials per temperature suggest that the estimated width of the transition region is larger when fewer readings are taken [7]. The same conclusion was reached when using fewer test temperatures across identical ranges [7].
In many practical situations – when the actual operating temperatures are assumed not to drop below a certain temperature – data is not collected across a large enough range of temperatures to determine the lower shelf or NDT. It is erroneous for the evaluator to assume that the minimum energy value recorded corresponds to the lower shelf. Unless the lower shelf is known or an assessment of the ductile fracture features is provided, the data is not guaranteed to fully represent the transition region. Even if the lower shelf is known, many of the common models are not representative in this situation because the NDT cannot be ascertained. Hence, the estimated inflection point of the transition region is not reliable.
Additionally, accurate analysis of outliers is critically dependent on the evaluator's familiarity with the material, the machining process, heat or chemical treatments, the position of the specimen inside the furnace during heat treatment, and the part of the vessel the sample was taken from. There are many engineering heuristics pertaining to the material itself that can be used to judge whether a measurement is reasonable or not. Analyses done without this information can lead to wildly different conclusions.
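A simple descriptive-statistics screen over the replicates can complement (but never replace) these engineering heuristics. The following is a minimal sketch; the pooling approach and the threshold k are my own illustrative choices:

```python
import numpy as np

def flag_outliers(T, Y, k=2.0):
    """Flag points whose deviation from their temperature's replicate mean
    exceeds k pooled standard deviations."""
    T, Y = np.asarray(T, dtype=float), np.asarray(Y, dtype=float)
    resid = np.empty_like(Y)
    for t in np.unique(T):
        idx = T == t
        resid[idx] = Y[idx] - Y[idx].mean()
    # pool replicate scatter across temperatures for a stable spread estimate
    pooled = np.sqrt(np.sum(resid ** 2) / (Y.size - np.unique(T).size))
    return np.abs(resid) > k * pooled

# demo: triplicate data with one gross outlier (81 J) at 40 degrees
T = [0] * 3 + [20] * 3 + [40] * 3 + [60] * 3 + [80] * 3
Y = [10, 11, 9, 20, 21, 19, 50, 51, 81, 90, 91, 89, 100, 101, 99]
flags = flag_outliers(T, Y)
print(np.flatnonzero(flags))  # → [8]
```

Any point flagged this way should still be checked against the material heuristics above before removal, since the screen has no knowledge of heat treatments or sampling position.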
Section 3.1 gives an example of how a simple heuristic about heat treatments drastically improves the fitted curve.

2.5. Assessing the fit of the model
Diagnostic measures are performed to ensure that the regression assumptions are met, the model fits the data, and the model makes sense for the physical problem. The diagnostic step of forming a regression model is undoubtedly the most difficult, especially for nonlinear regression. Many popular diagnostic measures, such as the coefficient of determination R² and Analysis of Variance (ANOVA), are often misunderstood or used incorrectly. The physical interpretation of Charpy data also places additional restrictions on the diagnostic measures that can be used. This section will define the most common diagnostic measures, address common errors, and establish which measures are appropriate for use with Charpy data.

2.5.1. Hypothesis testing and p-values
Statistical hypothesis testing is the crux of scientific experimentation. However, all too often, the correct interpretation of the test results is lost in oversimplification. It is important to have a clear picture of what hypothesis tests do and what sort of information they provide in order to use them correctly.
All hypothesis tests have a null hypothesis (denoted H_0) and an alternative hypothesis (denoted H_A). H_0 is some statement that is initially assumed to be true, while H_A is a dichotomous statement that must be true if H_0 is false. To test the hypotheses, we form some test statistic from the observed data. The test statistic is compared to the value we would expect to see in that situation, assuming that the test statistic follows a certain probability distribution.
All hypothesis tests carry an inherent risk of drawing the wrong conclusion. There are two types of errors:
• Type I errors – rejecting the null hypothesis when it is true (a false positive)
• Type II errors – accepting the null hypothesis when it is false (a false negative)
The decision rule about whether to accept or reject H_0 is formed using p-values. The p-value of a test is the probability (0 < p < 1), computed under the assumption that the null hypothesis is true, of observing a test statistic at least as extreme as the one actually obtained [11]. The level of significance α determines the degree of certainty the evaluator wishes to have about the conclusion of the test. If the p-value is greater than or equal to α, we fail to reject the null hypothesis; if the p-value is less than α, we reject the null hypothesis and accept the alternative hypothesis. We say that we are (1 − α)·100% confident about this result.
The most common misconception about hypothesis testing is that rejecting the null hypothesis is equivalent to proving that the alternative hypothesis is true. This is inherently incorrect, because no hypothesis test can be used to actually prove anything – it only indicates that the event is unusual enough to be statistically significant.

2.5.2. R²
R² is a single statistic that describes the proportion of variance in Y that is explained by the addition of x to the model. There are a number of formulas for calculating R², but the most basic is:

(2.10)  R² = [ Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) ]² / ( Σ_{i=1}^{n} (x_i − x̄)² · Σ_{i=1}^{n} (y_i − ȳ)² ) = s_xy² / (s_xx s_yy)
At the introductory level, R² is often erroneously interpreted as "the percent of variance that is explained by the model". More accurately, R² is the proportion of variance explained by the model when compared to a "null" (constant) model. Put simply, it measures whether the model describes the data better than a horizontal line would. Because of this, R² is easily inflated by the addition of "junk" information (such as unnecessary predictor variables or invalid parameter values), since it increases the chances of finding any relationship, significant or not. R² also tends to be larger when the range of x values is large [10]; therefore, R² is not a useful tool to assess goodness-of-fit.
R² also lacks meaning in nonlinear models. It is nonsensical to compare an inherently nonlinear model with a null model. R² should not be used with nonlinear regression, and should only be used for linear regression if supplemented with more descriptive measures of fit.

2.5.3. ANOVA
Analysis of Variance (ANOVA), like R², describes how well the model explains the variation in Y with respect to x. ANOVA is actually a hypothesis test that the parameters β_j are significantly different from 0. ANOVA is a special type of hypothesis test, called the F-test, which compares the proportion of variability contributed by separate components in the model. The p-values of the test are obtained from the F-distribution. An ANOVA table is used to organize the calculations and results of the procedure.

Table 4. ANOVA table for a simple linear regression model of the form Y_i = β_0 + β_1 x_i + ε_i, under the hypotheses H_0: β_1 = 0 and H_A: β_1 ≠ 0.

  Source       Degrees of Freedom   Sum of Squares         Mean Square         F
  Regression   df(R) = 1            SSR = Σ (Ŷ_i − Ȳ)²     MSR = SSR / df(R)   F* = MSR / MSE
  Error        df(E) = n − 2        SSE = Σ (Y_i − Ŷ_i)²   MSE = SSE / df(E)
  Total        df(T) = n − 1        SST = SSR + SSE

Table 5. ANOVA table for a multiple linear regression model of the form Y_i = β_0 + β_1 x_i1 + β_2 x_i2 + ... + β_p x_ip + ε_i, under the hypotheses H_0: β_1 = β_2 = ... = β_p = 0 and H_A: at least one β_j ≠ 0.

  Source       Degrees of Freedom   Sum of Squares         Mean Square         F
  Regression   df(R) = p            SSR = Σ (Ŷ_i − Ȳ)²     MSR = SSR / df(R)   F* = MSR / MSE
  Error        df(E) = n − p − 1    SSE = Σ (Y_i − Ŷ_i)²   MSE = SSE / df(E)
  Total        df(T) = n − 1        SST = SSR + SSE

To determine if the test statistic is significant, F* is compared to the critical value F(α; ν_1 = df(R), ν_2 = df(E)). This value is found in an F table or using statistical software. The p-value of the test corresponds to the area under the F-distribution to the right of the observed test statistic. We reject H_0 if F* > F(α; ν_1 = df(R), ν_2 = df(E)), or equivalently if the p-value is less than α.
The test statistic F* is the ratio of the mean square of regression to the mean square of error. It may take on any value greater than 0. If the numerator is greater than the denominator, then more of the variance is explained by the model than by chance. Therefore, the larger the F-statistic is, the better the model fits the data.
𝑌! is the fitted value of 𝑌 at 𝑥! using the regression model. PAGE 14 OF 32 The interpretation of a large 𝐹 ∗ is complicated when the model lacks an intercept, or is nonlinear. ANOVA assumes that the model includes an intercept term, so the interpretation of test results is mired if this is not the case. Additionally, the hypothesis of ANOVA for regression assumes that the parameters are related to each other linearly. By definition, then, it is meaningless to apply this hypothesis to a nonlinear model. 2.5.4. Transformations When the residuals of a nonlinear model are non‐normal or heteroscedastic, one possible corrective method is to transform the data so that it is approximately linear. Transformations can be applied to the predictor variable(s), the regressor variable, or both, depending on the violation. Transformations work especially well on “intrinsically linear” functions, such as logarithms and exponentials. However, transforming Charpy data to be linear loses a great deal of information about the width, steepness, and inflection point of the transition region. For Charpy data, the first corrective measure for non‐normality or heteroscedasticity should be outlier analysis, followed by weighted least squares regression. 2.5.5. Residuals Graphical measures utilizing the residuals are the most reliable form of regression diagnostics, especially for nonlinear regression. The residuals may be plotted against the predictor variable, the fitted values, or with respect to time. If the residuals are both homoscedastic and normally distributed, they will form an even band about 0. Heteroscedasticity is indicated by a pattern in the residuals – the spread may increase or decrease as x varies. The scatterplot may have a “trumpet” shape. Non‐normality can be caused by just one very severe outlier; it may also result as a consequence of heteroscedasticity. 
For this reason, corrective measures should first be applied to stabilize the variance, since doing so may automatically correct the non-normality. If a plot of the residuals suggests non-normality, a hypothesis test may be used to confirm or refute the suspicion. There are many tests of normality, each with its own strengths and weaknesses. Two of the most commonly used are the Shapiro-Wilk test and the Kolmogorov-Smirnov test.

• The Shapiro-Wilk test (S-W test) specifically tests whether the data are normally distributed. Its hypotheses are
  o H0: The sample data x1, x2, ..., xn came from a normally distributed population.
  o Ha: The sample data did not come from a normally distributed population.

• The Kolmogorov-Smirnov test (K-S test) can test the likelihood that a particular sample came from a reference distribution (in our case, the normal distribution), or that two sets of sample data come from the same distribution. The K-S test is a less powerful test of normality than the Shapiro-Wilk test, but has the advantage of being nonparametric: the comparison is built from the empirical distribution of the data rather than from an assumed distributional form, which makes it more robust to violations of the assumptions.
  o H0: The data come from the same population / distribution.
  o Ha: The data come from different populations / distributions.

3. CASE STUDIES

The following case studies were chosen to illustrate ways to handle "unusual" data. Case 3.1 illustrates the necessity of engineering background knowledge for identifying outliers in Charpy data. Case 3.2 shows the importance of collecting data at temperatures in the shelves, and demonstrates how the ductile fracture percent information can be used to help fit the curve. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm was used for unconstrained optimization, and the Constrained Optimization by Linear Approximation (COBYLA) algorithm was used for constrained minimization. The programs were written in Python using the SciPy, NumPy, matplotlib, and Pylab packages. The code is listed in section 5.2.

3.1. Cold rolled vs. hot rolled steels

Cold rolled (CR) and hot rolled (HR) steels are alloys that have been rolled under different temperature conditions to enhance particular properties. Cold rolling increases the yield strength of the material but decreases its fracture toughness: the material is better able to withstand slow, heavy loading, but is weak against sudden shocks. Conversely, hot rolling improves fracture toughness, but not yield strength. Hence, hot rolled steels are expected to perform "better" (that is, absorb a greater amount of energy) than cold rolled steels on the Charpy test.

3.1.1. Data collection

Undergraduate students in a mechanical engineering lab course at The University of Akron were assigned samples of either 1045 HR or 1045 CR steel and asked to conduct Charpy impact tests at eight specified temperatures. The size of the metal samples met the ASTM E23 standards described in section 1.2.2. Each student produced one full Charpy trial.
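The fits in these case studies require starting values; the heuristic of equation (2.8), implemented in getStarting in section 5.2.1, builds them from simple summaries of the data. A sketch of that heuristic with made-up readings (not the thesis data), assuming the model A + B·tanh((T − D)/C):

```python
import numpy as np

def starting_values(T, Y):
    # Heuristic starting values for the model A + B*tanh((T - D)/C),
    # following the scheme of equation (2.8) / getStarting in section 5.2.1.
    A = float(np.mean(Y))                   # vertical center: mean impact energy
    B = float(np.max(Y) - np.min(Y)) / 2.0  # half the vertical span of the data
    C = float(np.max(T) - np.min(T)) / 2.0  # half the temperature span
    D = float(np.median(T))                 # inflection near the middle temperature
    return [A, B, C, D]

# Hypothetical Charpy-style readings (illustrative only).
T = np.array([-180.0, -80.0, -55.0, -40.0, 0.0, 25.0, 95.0, 180.0])
Y = np.array([9.0, 11.0, 12.0, 13.0, 17.0, 25.0, 31.0, 37.0])
print(starting_values(T, Y))  # -> [19.375, 14.0, 180.0, -20.0]
```

These values are only rough seeds: the optimizer is expected to move them substantially, as the Start/End columns of the regression tables below illustrate.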
Equation (2.7) was chosen as the model curve. To find the curve fits for this data, the objective function

    Q = Σ_{i=1}^{n} [ Y_i − ( A + B·tanh((T_i − D)/C) ) ]²
(3.1) was minimized using the unconstrained BFGS algorithm. Figure 3 suggests that the upper shelf of the CR steel is greater than the upper shelf of the HR steel, and that the lower shelf of the CR steel is less than the lower shelf of the HR steel. Given what we know about the relative strengths and weaknesses of the two rolling processes, this is the opposite of what is expected. Clearly, the outliers present in both samples are adversely influencing the curve fits.

Figure 3. Scatterplot of and initial curve fits to 1045 CR and 1045 HR steel data.

3.1.2. Analysis of hot rolled steels

In Figures 3 and 4, we see that there is a clearly influential outlier at −180°F. The Shapiro-Wilk test of normality supports the conclusion that the residuals are not normally distributed. We justify the removal of this outlier on two counts: it is much larger than the other two measurements at that temperature, and it clashes strongly with the behavior of the rest of the curve. Removing this outlier fixes the issue of non-normality in the residuals.

Figure 4. Residuals of initial 1045 HR steel curve fit.

It is generally expected that for any Charpy test, the range of impact energy values at a single temperature is 15 – 25 ft-lbs, so none of the remaining points in Figure 4 can reasonably be omitted. However, the energy reading at 95°F in the first trial could possibly be an outlier, since it falls well below the energy readings at 95°F in the other two HR steel trials. It also falls within the range of the CR steels at 95°F, which should have lower fracture toughness than the HR steels.

3.1.3. Analysis of cold rolled steels

Figures 3 and 5 indicate that there are several large outliers in the data. These outliers are severe enough that the residuals fail the Shapiro-Wilk test of normality.
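The Shapiro-Wilk screening used in these analyses can be sketched as follows; the residuals here are synthetic stand-ins, not the actual thesis residuals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative residuals: well-behaved scatter plus one severe outlier,
# mimicking the situation described above (not the thesis data).
clean = rng.normal(loc=0.0, scale=2.0, size=23)
with_outlier = np.append(clean, 18.0)  # an 18 ft-lb residual: a 9-sigma point

for label, resid in (("clean", clean), ("with outlier", with_outlier)):
    w, p = stats.shapiro(resid)
    verdict = "no evidence against normality" if p > 0.05 else "reject normality"
    print("%s: W = %.3f, p = %.4f -> %s" % (label, w, p, verdict))
```

A single extreme point is typically enough to push the Shapiro-Wilk p-value far below any usual significance level, which is why removing one justified outlier can restore normality of the residuals.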
Because cold rolled steels have inferior fracture toughness to that of hot rolled steels, it is very likely that any impact energy values above the upper shelf determined for the hot rolled steels (approximately 37 ft-lbs) are due to error rather than the material. It is also unlikely that the impact energy of the cold rolled steel exceeds the average value of the hot rolled steel at any temperature in or around the upper shelf. Removing the observations that meet these criteria corrects the non-normality of the residuals.

Figure 5. Residuals of initial 1045 CR steel curve fit.

3.1.4. Summary of results

Initially, the data suggested the physically improbable condition that the upper shelf of the CR steel was higher than that of the HR steel. By carefully removing outliers, we were able to find curves that obey the physical constraints of the material; in Figure 6, we see that the predicted upper shelf of the HR steel is above that of the CR steel.

We also see in Figure 6 that the width of the transition region is vastly different for the two metals. Barring pure error, there is a physical reason why this might be the case: the purity of the metal affects the steepness of the transition region, so it is possible that during processing the HR steel was not scrubbed of excess impurities, but the CR steel was.

Table 6. Regression results of 1045 HR and CR steel trials. ("Start" and "End" are the starting and final parameter values; the last column gives the Shapiro-Wilk p-value for the residuals.)

Data set                  Figure  A Start  A End   B Start  B End   C Start  C End   D Start  D End   Normal residuals?
1045 HR steels, pooled       3    20.438   23.408  15.0     11.123  180.0    48.062  -39.64   20.529  No (p = .007)
1045 HR steels, pooled,      6    20.108   22.243  15.00    14.381  15.00    87.969    .061   17.035  Yes (p = .996)
  outlier removed
1045 CR steels, pooled       3    20.21    23.04   23.50    12.48   180.     26.47   -39.64   10.35   No (p = .003)
1045 CR steels, pooled,      6    16.33    18.64    7.88     7.89    9.01     9.01     .346    0.345  Yes (p = .156)
  outliers removed

Figure 6. Comparison of curve fits to adjusted 1045 CR and 1045 HR steel data.

3.2. Using ductile fracture percent data

This example illustrates the usefulness of the qualitative fracture feature information. In this case, the experimenters were only interested in the behavior of the transition region, so the data set lacks definition in the upper and lower shelves. Because the shelves define parameters in both hyperbolic tangent models (2.7) and (2.9), unconstrained optimization is unable to converge in this example.
To circumvent this issue, the percent ductile fracture features can be used to select parameter constraints and starting values. Since the NDT (where the specimen displays 10% ductile features) corresponds to the left endpoint of the transition region, and the RTT (where the specimen displays 90% ductile features) corresponds to the right endpoint, it can be assumed that specimens with less than 10% ductile fracture belong to the lower shelf and those with greater than 90% ductile fracture belong to the upper shelf. This knowledge can be applied to select appropriate linear constraints.

3.2.1. Data collection

2.25Cr-1Mo steel specimens from the inner surface of a pressure vessel were subjected to a Charpy test. In addition to the impact energy, the percent ductile fracture features were determined for each specimen.

3.2.2. Analysis

Equation (2.9) was used as the model curve for this data. The objective function

    Q = Σ_{i=1}^{n} [ Y_i − ( (a1/2)·(1 − tanh((T_i − a4)/a3)) + (a2/2)·(1 + tanh((T_i − a4)/a3)) ) ]²
(3.2) was first minimized using the unconstrained BFGS optimization algorithm. These initial attempts were unsuccessful because of the lack of data points defining the upper and lower shelves. Regardless of how the starting values are specified, the unconstrained algorithm fails to converge in this situation, because there is insufficient information to suggest that the hyperbolic tangent model even applies. The parameter estimates generated by this method are meaningless, since they suggest that negative impact energy is possible.

Figure 7. Plot of percent ductile fracture versus temperature of 2.25Cr-1Mo steel data.

However, a scatterplot of the ductile fracture percent data versus temperature does reveal shelf-like behavior at the extreme temperatures. The hyperbolic tangent model therefore appears reasonable, but it is necessary to restrict the lower and upper shelf parameters a1 and a2. Using the constraints

    a1 ≥ 0
    max(Y) − a2 ≥ 0                                        (3.3)

a new curve satisfying the constraints can be fit to the data using the COBYLA algorithm. A plot of the new residuals shows that the constrained curve fit satisfies the criteria of being homoscedastic and normally distributed.

Table 7. Regression results using constrained optimization on 2.25Cr-1Mo steel data. ("Start" and "End" are the starting and final parameter values; the last column gives the Shapiro-Wilk p-value for the residuals.)

Data set           Figure  a1 Start  a1 End  a2 Start  a2 End  a3 Start  a3 End  a4 Start  a4 End  Normal residuals?
2.25Cr-1Mo steel      8     19.00    12.10    221.5     223     100.     74.91    103.3    93.82   Yes (p = .688)
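The constrained fit can also be sketched with SciPy's newer interface, scipy.optimize.minimize with method="COBYLA" (the appendix code uses the older fmin_cobyla). The data below are synthetic stand-ins for the proprietary measurements, chosen only to have sparse shelves in the spirit of Table 10:

```python
import numpy as np
from scipy import optimize

# Synthetic transition-region data (illustrative, not the thesis data).
T = np.array([-20.0, 10.0, 25.0, 40.0, 70.0, 85.0, 100.0, 120.0, 150.0, 180.0])
Y = np.array([7.0, 25.0, 45.0, 54.0, 80.0, 78.0, 122.0, 150.0, 175.0, 220.0])

def model(a, t):
    # Equation (2.9): lower shelf a[0], upper shelf a[1],
    # transition width a[2], inflection point a[3].
    s = np.tanh((t - a[3]) / a[2])
    return 0.5 * a[0] * (1 - s) + 0.5 * a[1] * (1 + s)

def q(a):
    # Objective (3.2): sum of squared residuals.
    return np.sum((Y - model(a, T)) ** 2)

# Constraints (3.3): lower shelf non-negative, upper shelf at most max(Y).
cons = [{"type": "ineq", "fun": lambda a: a[0]},
        {"type": "ineq", "fun": lambda a: np.max(Y) - a[1]}]

a0 = [10.0, 220.0, 100.0, 90.0]  # rough starting values in the spirit of section 3.2
res = optimize.minimize(q, a0, method="COBYLA", constraints=cons,
                        options={"maxiter": 2000})
print(res.x, res.fun)
```

COBYLA builds successive linear approximations of the objective and constraints, so it needs no derivatives; this is convenient here, where the constraints encode physical knowledge (non-negative energy, a bounded upper shelf) rather than anything about the gradient of Q.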
Figure 8. Constrained curve fit to 2.25Cr-1Mo steel data.

Figure 9. Residuals of constrained curve fit to 2.25Cr-1Mo steel data.

4. CONCLUSION

Despite its age, the Charpy test continues to be the most widely used method of assessing fracture toughness. However, the test is only as reliable as its interpretation, which requires both engineering expertise and an understanding of statistics. Better data analysis results from fully understanding the alloy's properties, the physical aspects governing the Charpy test, and the meaning of the statistical tests. By integrating engineering heuristics, advanced optimization algorithms, data-specific models, and appropriate diagnostic measures, the usefulness and reliability of the curve fit are greatly improved.

5.
APPENDICES

5.1. Tables

Table 8. Results of Charpy V-notch impact test of 1045 HR steel (impact energy in ft-lbs).

Temperature (°F)   Trial 1   Trial 2   Trial 3
      -180           28         8         9
       -80           12.5      11        10
       -55           14         9        14
       -40           13        10        16
         0           26        15        17
        25           28        21        26
        95           26        36        31
       180           37        35        38

Table 9. Results of Charpy V-notch impact test of 1045 CR steel (impact energy in ft-lbs).

Temperature (°F)   Trial 1   Trial 2   Trial 3
      -180            9        13         8
       -80           10        12        10
       -55           10        12        11
       -40           10        13        11
         0           16        18        21
        25           24        29        35
        95           28        28        55
       180           24        26        52

Table 10. Results of Charpy V-notch impact test of 2.25Cr-1Mo steel pressure vessel inner surface samples.

Temperature (°C)   Impact Energy (J)   % Ductile Fracture
      180                220                 100
      170                223                 100
      150                180                  90
      150                167                  90
      130                171                  90
      120                174                  70
      120                127                  70
      110                151                  80
      100                137                  60
      100                122                  50
      100                109                  45
       85                 78                  30
       80                105                  45
       70                 80                  20
       50                 88                  25
       40                 54                  10
       25                 55                   5
       25                 35                   5
       20                 48                   5
       10                 25                   0
       -5                 25                   0
      -20                  7                   0

5.2. Code listing

5.2.1. tan.py

###########################################################
# Calculates the starting values in (2.8) from the data.
# Uses the unconstrained BFGS algorithm to minimize
# equation (3.1).
###########################################################
import math  # needed for math.fsum; not provided by the star imports below

from scipy import optimize, stats
from numpy import *
from pylab import *

###########################################################
# Function definitions
###########################################################

def getStarting(T,Y):
    #--- Returns starting value estimates for ASTM E23 tanh model.
    A=sum(Y)/len(Y)
    B=(max(Y)-min(Y))/2.
    C=(max(T)-min(T))/2.
    D=stats.cmedian(T)  # cmedian was removed from later SciPy releases; median(T) is equivalent here
    x0=[A, B, C, D]
    return x0

def func(x):
    #--- Hyperbolic tangent function passed to the optimization routine.
    func=((x[0]+x[1]*tanh((T-x[3])/x[2])))
    return func

def this(c,x):
    #--- Hyperbolic tangent function used for plotting.
    this=((c[0]+c[1]*tanh((x-c[3])/c[2])))
    return this

def lsqs(x):
    #--- Least squares function passed to the optimization routine.
    lsqs=math.fsum((Y-func(x))**2)
    return lsqs

def residuals(p,q,r):
    #--- Calculates the residuals given the parameters p, the abscissas q,
    #--- and the observed values r.
    return r - this(p,q)

def results():
    #--- Print the results of the regression.
    print ""
    print "Starting values x0 =", sval
    print ""
    print "Final values x0 = ", x0
    print ""
    print "Residuals"
    print resid
    print ""
    print "Shapiro-Wilk test for normality of residuals (p-value):", pval
    print ""
    print "------------------------------------------------------------------"
    print ""
    print ""
    return

###########################################################
# Read in data
###########################################################

a=loadtxt('crsteel.txt')
T=a[0,:]
n=len(T)
Y1=a[1,:]
Y2=a[2,:]
Y3=a[3,:]
Y4=a[4,:]
Y5=a[5,:]
Y6=a[6,:]
T3=append(T,append(T,T))
CR=append(Y1,append(Y2,Y3))
HR=append(Y4,append(Y5,Y6))

###########################################################
# Analyze pooled CR, HR trials
###########################################################

for i in range(2):
    T=T3
    if i==0:
        Y=CR
        print "--------------", "Analysis of 1045 CR Steels", "---------------"
    else:
        Y=HR
        print "--------------", "Analysis of 1045 HR Steels", "---------------"
    print "Temperatures (F)", T
    print "Impact Energy (ft-lbs)", Y
    sval=getStarting(T,Y)
    x0=optimize.fmin_bfgs(lsqs,sval)
    resid=residuals(x0,T,Y)
    normal=stats.shapiro(resid)
    pval=normal[1]
    results()
    if i==0:
        cr_resid=resid
        cr_vals=x0
    else:
        hr_resid=resid
        hr_vals=x0

5.2.2. crmoalt.py

###########################################################
# Calculates starting value estimates from the ductile
# fracture percent data. Uses the COBYLA algorithm to
# minimize equation (3.2) subject to (3.3).
###########################################################
import math  # needed for math.fsum; not provided by the star imports below

from scipy import optimize, stats
from numpy import *
from pylab import *

T = loadtxt('crmo2_temp_C.txt')
Y = loadtxt('crmo2_en_J.txt')
D = loadtxt('crmo2_ductilefracturepercent.txt')

###########################################################
# Function definitions
###########################################################

def this(c,x):
    #--- Hyperbolic tangent function used for plotting.
    this=(c[0]/2.)*(1-tanh((x-c[3])/c[2])) + (c[1]/2.)*(1+tanh((x-c[3])/c[2]))
    return this

def func(x):
    #--- Hyperbolic tangent function passed to the optimization routine.
    func=(x[0]/2.)*(1-tanh((T-x[3])/x[2])) + (x[1]/2.)*(1+tanh((T-x[3])/x[2]))
    return func

def lsqs(x):
    #--- Least squares function passed to the optimization routine.
    lsqs=math.fsum((Y-func(x))**2)
    return lsqs

def residuals(p,q,r):
    #--- Calculates the residuals given the parameters p, the abscissas q,
    #--- and the observed values r.
    return r - this(p,q)

def results():
    #--- Print the results of the regression.
    print ""
    print "Starting values x0 =", sval
    print ""
    print "Final values x0 = ", x0
    print ""
    print "Residuals"
    print resid
    print ""
    print "Shapiro-Wilk test for normality of residuals (p-value):", pval
    print ""
    print "------------------------------------------------------------------"
    print ""
    print ""
    return

def mincon(x):
    #--- Constraint function (3.3): lower shelf must be non-negative.
    return x[0]

def maxcon(x):
    #--- Constraint function (3.3): upper shelf must not exceed max(Y).
    return max(Y) - x[1]

###########################################################
# Calculate starting values
###########################################################

n=len(T)
ndt=0
rtt=0
dbtt=0
ndtcount=0
rttcount=0
dbttcount=0
nul=0
nulcount=0
h=0
hcount=0
for i in range(0,n,1):
    if D[i]==0:
        nul=Y[i]+nul
        nulcount=nulcount+1
    elif D[i]==100:
        h=Y[i]+h
        hcount=hcount+1
    elif D[i]==10:
        ndt=T[i]+ndt
        ndtcount=ndtcount+1
    elif D[i]==50:
        dbtt=T[i]+dbtt
        dbttcount=dbttcount+1
    elif D[i]==90:
        rtt=T[i]+rtt
        rttcount=rttcount+1
ndt=ndt/ndtcount
rtt=rtt/rttcount
dbtt=dbtt/dbttcount
h=h/hcount
nul=nul/nulcount

A=nul            #lower shelf
B=h              #upper shelf
C=dbtt           #inflection point
DD=abs(rtt-ndt)  #width of transition region
# Note: func above uses x[2] as the tanh width and x[3] as the inflection
# point, so sval=[A, B, C, DD] seeds a3 with the DBTT and a4 with the
# transition width; this matches the starting values reported in Table 7.
sval=[A, B, C, DD]

###########################################################
# Perform regression, calculate residuals, test the
# normality of the residuals, print results
###########################################################

print "--------------", "Analysis of 2.25Cr-1Mo Steels", "---------------"
print "Temperatures (C)", T
print "Impact Energy (J)", Y
print "Ductile features (%)", D
print "NDT = ", ndt
print "RTT = ", rtt
print "DBTT = ", dbtt
print ""
x0=optimize.fmin_cobyla(lsqs, sval, cons=[mincon, maxcon], rhoend=1e-7, maxfun=1000)
resid=residuals(x0,T,Y)
normal=stats.shapiro(resid)
pval=normal[1]
results()

6.
ACKNOWLEDGEMENTS The data in section 3.2 was used with permission from The Equity Engineering Group. The author would like to thank the following individuals for sharing their expertise on materials science, fracture mechanics, and the Charpy test: Robert Yeager, Jeff Brubaker, and The Equity Engineering Group, especially: David Osage, Charles Panzarella, James Leta, and Jeremy Janelle. The author would also like to thank Dr. Tirumali Srivatsan and Manigandan Kannan in the Department of Mechanical Engineering at The University of Akron, OH for compiling the Charpy test data found in section 3.1, and for their counsel on aspects of the Charpy test. All figures in this document were created in Python and edited in PaintShopPro 6 by the author.
7.
BIBLIOGRAPHY

[1] C.N. McCowan, T.A. Siewert, and D.P. Vigliotti. The NIST Charpy V-Notch Verification Program: Overview and Operating Procedures. NIST, Materials Reliability Division.

[2] T.A. Siewert, M.P. Manahan, C.N. McCowan, J.M. Holt, F.J. Marsh, and E.A. Ruth. The History and Importance of Impact Testing. Pendulum Impact Testing: A Century of Progress, ASTM STP 1380, T.A. Siewert and M.P. Manahan, Sr., Eds., American Society for Testing and Materials, West Conshohocken, PA, 1999.

[3] ASTM Standard E23, 2007ae1, "Standard Test Methods for Notched Bar Impact Testing of Metallic Materials", ASTM International, West Conshohocken, PA, 2007, DOI: 10.1520/E0023-07AE01, www.astm.org.

[4] API 579-1/ASME FFS-1, Fitness for Service. 2nd edition, June 2007.

[5] M.P. Manahan, Jr., C.N. McCowan, and M.P. Manahan, Sr. Percent Shear Area Determination in Charpy Impact Testing, Journal of ASTM International, January 2008.

[6] A.L. Lowe, Jr. Factors Influencing the Accuracy of Charpy Impact Test Data. The Charpy Impact Test: Factors and Variables, ASTM STP 1072, John M. Holt, Ed., American Society for Testing and Materials, Philadelphia, PA, 1990.

[7] Hyung-Seop Shin, Jong-Seo Park, and Hae-Moo Lee. Curve Fitting in the Transition Region of Charpy Impact Data. International Journal of Modern Physics B, Vol. 22, Nos. 9-11 (2008), pp. 1496-1503.

[8] David G. Luenberger. Introduction to Linear and Nonlinear Programming. Addison-Wesley, Reading, MA, 1973.

[9] MPM Technologies. http://www.mpmtechnologies.com/CharpyFit.htm

[10] M.H. Kutner, C.J. Nachtsheim, J. Neter, and W. Li. Applied Linear Statistical Models. McGraw-Hill/Irwin, New York, NY, 2005.

[11] Robert H. Carver and Jane Gradwohl Nash. Doing Data Analysis with SPSS Version 16. Brooks/Cole, Cengage Learning, Belmont, CA, 2009.