CALIBRATION

Name:
The primary measurement scale is usually the scientifically relevant scale, and measurements on this scale are typically relatively more precise than measurements on the secondary scale. However, the secondary measurement is typically easier to obtain (i.e., it is usually cheaper, faster, or more readily available). So given a measurement on the secondary scale, we want to convert it to an estimate of the measurement on the primary scale. The steps involved are:
Given that in the calibration problem the primary measurement (the higher-quality measurement) is assigned to the independent variable (x axis) and the secondary measurement is assigned to the dependent variable (y axis), a reasonable question is why we don't simply switch the axes and assign the secondary measurement to the independent variable. The reason is that least squares fitting assumes that the values of the independent variable are fixed (i.e., measured without error). To satisfy this assumption, we need to assign the higher-quality measurement to the independent variable.

When Dataplot performs a calibration, it first prints a summary of the initial fit. It then loops through each point being calibrated and prints the estimate on the primary scale and the corresponding confidence limits.

Calibration is discussed in the NIST/SEMATECH e-Handbook of Statistical Methods.
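The distinction between the two choices of axes can be illustrated with a short Python sketch (simulated data, not from this page): the classical estimator fits y on x and then inverts the fit, while the inverse (Krutchkoff) estimator regresses x on y directly, and the two generally give different answers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration data: x (primary) is error-free,
# y (secondary) carries the measurement noise.
x = np.linspace(100, 400, 16)
y = 13.5 + 0.79 * x + rng.normal(0, 26, size=x.size)

y0 = 200.0  # a new secondary measurement to calibrate

# Classical estimator: fit y on x, then invert the fitted line.
b1, b0 = np.polyfit(x, y, 1)
x0_classical = (y0 - b0) / b1

# Inverse (Krutchkoff) estimator: fit x on y, then predict directly.
c1, c0 = np.polyfit(y, x, 1)
x0_inverse = c0 + c1 * y0

print(x0_classical, x0_inverse)
```

The inverse estimator tends to shrink estimates toward the mean of the observed x values, which is one reason the two methods produce different values (as in the Inverse (Krutchkoff) rows of the output below).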
The following are some quantities that are used by several methods:
For most of these methods, given a calibration point Y0, the corresponding X0 is estimated from the original fit by

    X0 = (Y0 - A0)/A1

with A0 and A1 denoting the intercept and slope coefficients from the original fit.
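This inversion of the fitted line can be sketched in Python (hypothetical data; numpy's polyfit stands in for Dataplot's fit):

```python
import numpy as np

# Hypothetical calibration data: x = primary (precise) scale,
# y = secondary (cheap) scale, with small measurement errors.
x = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0])
y = 15.0 + 0.8 * x + np.array([2.1, -1.5, 0.7, -0.3, 1.9, -2.4])

# Fit the calibration line y = A0 + A1*x by least squares.
A1, A0 = np.polyfit(x, y, 1)

# Invert the fit: given a new secondary measurement y0,
# estimate the corresponding primary value x0.
y0 = 180.0
x0 = (y0 - A0) / A1
print(x0)
```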
Dataplot generates the linear calibration using the following methods:
Additional methods may be supported in future releases. Dataplot generates the quadratic calibration using the following methods:
Additional methods may be supported in future releases.
LINEAR CALIBRATION <y> <x> <y0>             <SUBSET/EXCEPT/FOR qualification>

where <y> is the response variable (secondary measurements);
      <x> is the independent variable (primary measurements);
      <y0> is a number, parameter, or variable containing the secondary
           measurements where the calibration is to be performed;
and where the <SUBSET/EXCEPT/FOR qualification> is optional.

This syntax computes a linear calibration analysis.
QUADRATIC CALIBRATION <y> <x> <y0>          <SUBSET/EXCEPT/FOR qualification>

where <y> is the response variable (secondary measurements);
      <x> is the independent variable (primary measurements);
      <y0> is a number, parameter, or variable containing the secondary
           measurements where the calibration is to be performed;
and where the <SUBSET/EXCEPT/FOR qualification> is optional.

This syntax computes a quadratic calibration analysis.
QUADRATIC CALIBRATION Y X Y0
LINEAR CALIBRATION Y X Y0 SUBSET X > 2
The following variables are written to the file dpst1f.dat.
The following variables are written to the file dpst2f.dat.
The following variables are written to the file dpst3f.dat.
The following variables are written to the file dpst4f.dat.
For example, to generate 90% confidence intervals, enter
F. Graybill and H. Iyer, "Regression Analysis," First Edition, Duxbury Press, pp. 427-431.

R. G. Krutchkoff (1967), "Classical and Inverse Methods of Calibration," Technometrics, Vol. 9, pp. 425-439.

Neter, Wasserman, and Kutner, "Applied Linear Statistical Models," Third Edition, Irwin, pp. 173-175.

B. Hoadley (1970), "A Bayesian Look at Inverse Linear Regression," Journal of the American Statistical Association, Vol. 65, pp. 356-369.

H. Scheffe (1973), "A Statistical Theory of Calibration," Annals of Statistics, Vol. 1, pp. 1-37.

P. J. Brown (1982), "Multivariate Calibration" (with discussion), Journal of the Royal Statistical Society, Series B, Vol. 44, pp. 287-321.

A. Racine-Poon (1988), "A Bayesian Approach to Nonlinear Calibration Problems," Journal of the American Statistical Association, Vol. 83, pp. 650-656.

C. Osborne (1991), "Statistical Calibration: A Review," International Statistical Review, Vol. 59, pp. 309-336.

Hamilton (1992), "Regression with Graphics: A Second Course in Applied Statistics," Duxbury Press.
SKIP 25
READ NATR533.DAT Y X
LET Y0 = DATA 150 200 250 300
.
LINEAR CALIBRATION Y X Y0

The following output is generated:

                    Linear Calibration Analysis

              Summary of Linear Fit Between Y and X

Number of Observations:               16
Estimate of Intercept:                13.5058
SD(Intercept):                        21.0476
t(Intercept):                         0.6416
Estimate of Slope:                    0.7902
SD(Slope):                            0.0710
t(Slope):                             11.1236
Residual Standard Deviation:          26.2077

                    Linear Calibration Summary

Y0 =    150.0000
--------------------------------------------------------------------------
                                                  95%            95%
Method                               X0       Lower Limit    Upper Limit
--------------------------------------------------------------------------
1. Inverse Prediction Limits:     172.7309       90.6910       246.3664
2. Graybill-Iyer:                 172.7309       90.6910       246.3664
3. Neter-Wasserman-Kutner:        172.7309       96.4652       248.9967
4. Propagation of Error:          172.7309       83.3505       262.1114
5. Inverse (Krutchkoff):          183.7929      106.8873       260.6986
6. Maximum Likelihood:            172.7309      142.7454       194.8587
7. Bootstrap (Residuals):         172.7309      145.7333       192.7617
8. Bootstrap (Data):              172.7309      146.6036       193.8464
--------------------------------------------------------------------------

Y0 =    200.0000
--------------------------------------------------------------------------
                                                  95%            95%
Method                               X0       Lower Limit    Upper Limit
--------------------------------------------------------------------------
1. Inverse Prediction Limits:     236.0051      158.9668       309.5252
2. Graybill-Iyer:                 236.0051      158.9668       309.5252
3. Neter-Wasserman-Kutner:        236.0051      162.1587       309.8515
4. Propagation of Error:          236.0051      134.6394       337.3707
5. Inverse (Krutchkoff):          240.6357      168.0773       313.1941
6. Maximum Likelihood:            236.0051      216.1080       253.3034
7. Bootstrap (Residuals):         236.0051      217.8168       251.9630
8. Bootstrap (Data):              236.0051      220.3722       254.0829
--------------------------------------------------------------------------

Y0 =    250.0000
--------------------------------------------------------------------------
                                                  95%            95%
Method                               X0       Lower Limit    Upper Limit
--------------------------------------------------------------------------
1. Inverse Prediction Limits:     299.2792      225.1548       374.7718
2. Graybill-Iyer:                 299.2792      225.1548       374.7718
3. Neter-Wasserman-Kutner:        299.2792      225.8776       372.6809
4. Propagation of Error:          299.2792      185.8826       412.6759
5. Inverse (Krutchkoff):          297.4784      225.7040       369.2529
6. Maximum Likelihood:            299.2792      283.0275       316.7070
7. Bootstrap (Residuals):         299.2792      283.5742       316.0866
8. Bootstrap (Data):              299.2792      283.3301       318.4212
--------------------------------------------------------------------------

Y0 =    300.0000
--------------------------------------------------------------------------
                                                  95%            95%
Method                               X0       Lower Limit    Upper Limit
--------------------------------------------------------------------------
1. Inverse Prediction Limits:     362.5533      289.2164       442.1448
2. Graybill-Iyer:                 362.5533      289.2164       442.1448
3. Neter-Wasserman-Kutner:        362.5533      287.5867       437.5200
4. Propagation of Error:          362.5533      237.0930       488.0137
5. Inverse (Krutchkoff):          354.3211      279.7673       428.8750
6. Maximum Likelihood:            362.5533      343.2002       387.2145
7. Bootstrap (Residuals):         362.5533      345.3757       384.6494
8. Bootstrap (Data):              362.5533      342.0917       388.3067
--------------------------------------------------------------------------
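As a rough hand-check on output like the above, the Neter-Wasserman-Kutner approximate inverse prediction limits can be computed directly from the textbook formula. This sketch uses simulated data (not NATR533.DAT, whose values are not reproduced here), with scipy supplying the t quantile:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(100, 400, 16)                      # primary measurements
y = 13.5 + 0.79 * x + rng.normal(0, 26, size=16)   # secondary measurements

n = x.size
b1, b0 = np.polyfit(x, y, 1)           # fitted slope and intercept
resid = y - (b0 + b1 * x)
mse = np.sum(resid**2) / (n - 2)       # residual mean square
sxx = np.sum((x - x.mean())**2)

y0 = 200.0
x0 = (y0 - b0) / b1                    # classical point estimate

# Approximate standard error of the inverse prediction
# (Neter, Wasserman, and Kutner, "Applied Linear Statistical Models")
s_pred = np.sqrt(mse / b1**2 * (1 + 1/n + (x0 - x.mean())**2 / sxx))
t = stats.t.ppf(0.975, n - 2)          # 95% two-sided limits
lower, upper = x0 - t * s_pred, x0 + t * s_pred
print(x0, lower, upper)
```

Note that these limits are approximate and symmetric about X0, whereas some of the methods in the table above (e.g., the inverse prediction limits) can produce asymmetric intervals.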
Date created: 09/09/2010