Linear method comparison compares two analytical methods, a test method against a reference/comparative method, to determine analytical accuracy. The procedure uses ordinary least-squares (OLS) linear regression and so does not account for error in the reference/comparative method. This limitation is not a concern when errors in the reference method are very small or are minimised by replicate measurements.
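As a rough illustration only (not Analyse-it's implementation), the ordinary least-squares fit of test-method results against reference/comparative-method results can be sketched in Python; the data values below are hypothetical.

    import numpy as np

    # Hypothetical paired observations: reference/comparative method (x) and test method (y)
    x = np.array([1.0, 2.1, 3.0, 4.2, 5.1, 6.0, 7.2, 8.1])
    y = np.array([1.1, 2.0, 3.2, 4.1, 5.3, 6.2, 7.1, 8.4])

    # Ordinary least-squares fit: the intercept estimates constant bias, the slope proportional bias
    slope, intercept = np.polyfit(x, y, 1)
    print(f"intercept (constant bias) = {intercept:.3f}")
    print(f"slope (proportional bias) = {slope:.3f}")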
The requirements of the test are:
Data in existing Excel worksheets can be used and should be arranged in the List dataset layout. The dataset must contain at least two continuous scale variables containing the observations for each method. If replicates are observed then a List dataset with repeat/replicate measures layout should be used to arrange the replicates for each method.
When entering new data we recommend using New Dataset to create a new method comparison dataset.
To start the test:
Excel 97, 2000, 2002 & 2003: Select any cell in the range containing the dataset to analyse, then click Analyse on the Analyse-it toolbar, click Method comparison, then click Linear fit.
The report shows the number of cases analysed and, if applicable, how many cases were excluded due to missing values. For each method the name, number of replicates, and repeatability (if measured in duplicate) are shown; repeatability is expressed as SD or CV depending on the Errors in Test method option. The range of observations (minimum and maximum) for the reference/comparative method is shown, together with the correlation coefficient r, which tests whether the range is adequate for ordinary linear regression (adequate when r > 0.975).
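A minimal sketch of the range-adequacy check, again in Python with hypothetical data (not Analyse-it's implementation):

    import numpy as np

    x = np.array([1.0, 2.1, 3.0, 4.2, 5.1, 6.0, 7.2, 8.1])   # reference/comparative method
    y = np.array([1.1, 2.0, 3.2, 4.1, 5.3, 6.2, 7.1, 8.4])   # test method

    # Range of the reference/comparative observations and the correlation coefficient r
    r = np.corrcoef(x, y)[0, 1]
    print(f"range: {x.min():.2f} to {x.max():.2f}")
    print(f"r = {r:.4f} -> range {'adequate' if r > 0.975 else 'possibly inadequate'} for OLS regression")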
Syx is a measure of the dispersion of observations around the fitted regression line. If the test method was observed in singlicate, Syx gives an estimate of the precision of the test method. When the test method is measured in replicate and the mean of the replicates is used, Syx does not estimate precision, as some random error is removed by averaging the replicates.
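Using the standard formula for the standard error of the estimate, Syx can be sketched as follows (hypothetical data, not Analyse-it's implementation):

    import numpy as np

    x = np.array([1.0, 2.1, 3.0, 4.2, 5.1, 6.0, 7.2, 8.1])
    y = np.array([1.1, 2.0, 3.2, 4.1, 5.3, 6.2, 7.1, 8.4])

    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)

    # Syx: dispersion of observations about the fitted line, with n - 2 degrees of freedom
    syx = np.sqrt(np.sum(resid ** 2) / (len(x) - 2))
    print(f"Syx = {syx:.3f}")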
Constant and proportional bias are shown next. When two methods produce equivalent results the constant bias will be zero and the proportional bias will be one. Confidence intervals show the range that is likely to contain the true constant and proportional bias, and a hypothesis test compares the constant and proportional bias against these ideal values. If the p-value is statistically significant, the bias differs from the ideal value.
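The confidence intervals and hypothesis tests can be sketched with the usual OLS standard-error formulas; the sketch below assumes a 95% confidence level and hypothetical data, and is not Analyse-it's implementation.

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.1, 3.0, 4.2, 5.1, 6.0, 7.2, 8.1])
    y = np.array([1.1, 2.0, 3.2, 4.1, 5.3, 6.2, 7.1, 8.4])

    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    syx = np.sqrt(np.sum((y - (intercept + slope * x)) ** 2) / (n - 2))
    sxx = np.sum((x - x.mean()) ** 2)

    # Standard errors of the intercept (constant bias) and slope (proportional bias)
    se_intercept = syx * np.sqrt(1.0 / n + x.mean() ** 2 / sxx)
    se_slope = syx / np.sqrt(sxx)

    # 95% confidence intervals
    tcrit = stats.t.ppf(0.975, df=n - 2)
    ci_intercept = (intercept - tcrit * se_intercept, intercept + tcrit * se_intercept)
    ci_slope = (slope - tcrit * se_slope, slope + tcrit * se_slope)

    # Two-sided tests against the ideal values: intercept = 0, slope = 1
    p_intercept = 2 * stats.t.sf(abs(intercept / se_intercept), df=n - 2)
    p_slope = 2 * stats.t.sf(abs((slope - 1.0) / se_slope), df=n - 2)
    print(ci_intercept, p_intercept)
    print(ci_slope, p_slope)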
The scatter plot (see below) shows the observations of the reference/comparative method (X) plotted against the test method (Y). The Use replicates option determines how replicates for each method, if available, are plotted.
Beneath the scatter plot is a residual plot (see below) of the differences between the test method observations and the fitted line. The residuals are standardized (residual / Syx), so observations outside ±4 indicate possible outliers.
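Standardizing the residuals and flagging possible outliers can be sketched as follows (hypothetical data, not Analyse-it's implementation):

    import numpy as np

    x = np.array([1.0, 2.1, 3.0, 4.2, 5.1, 6.0, 7.2, 8.1])
    y = np.array([1.1, 2.0, 3.2, 4.1, 5.3, 6.2, 7.1, 8.4])

    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    syx = np.sqrt(np.sum(resid ** 2) / (len(x) - 2))

    # Standardized residuals; values outside +/-4 are flagged as possible outliers
    std_resid = resid / syx
    outliers = np.where(np.abs(std_resid) > 4)[0]
    print(std_resid.round(2), outliers)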
Bias can be determined for up to three decision levels.
To determine bias at specific decision levels:
An additional table appears above the scatter plot showing the bias at each decision level, with confidence intervals.
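At a given decision level Xc, the bias is the fitted test-method value minus Xc, and a confidence interval follows from the standard error of the fitted value. A sketch with hypothetical data and decision levels (not Analyse-it's implementation):

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.1, 3.0, 4.2, 5.1, 6.0, 7.2, 8.1])
    y = np.array([1.1, 2.0, 3.2, 4.1, 5.3, 6.2, 7.1, 8.4])

    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    syx = np.sqrt(np.sum((y - (intercept + slope * x)) ** 2) / (n - 2))
    sxx = np.sum((x - x.mean()) ** 2)
    tcrit = stats.t.ppf(0.975, df=n - 2)

    for level in (2.0, 4.0, 6.0):                  # hypothetical decision levels
        bias = intercept + slope * level - level   # fitted test result minus the decision level
        se = syx * np.sqrt(1.0 / n + (level - x.mean()) ** 2 / sxx)
        print(f"level {level}: bias {bias:+.3f} (95% CI {bias - tcrit * se:+.3f} to {bias + tcrit * se:+.3f})")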
Bias can be compared against a bias performance goal. The allowable bias can be specified in absolute units of the analyte, as a percentage of analyte concentration, or as a combination of the two, in which case the larger of the absolute value and the percentage of concentration is used.
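The "larger of the two" rule can be sketched with a hypothetical helper function (the name and arguments are illustrative, not part of Analyse-it):

    def allowable_bias(level, absolute=None, percent=None):
        # Hypothetical helper: allowable bias in analyte units at a given concentration.
        # When both an absolute and a percentage goal are given, the larger applies.
        goals = []
        if absolute is not None:
            goals.append(absolute)
        if percent is not None:
            goals.append(level * percent / 100.0)
        return max(goals)

    print(allowable_bias(4.0, absolute=0.25, percent=5.0))   # max(0.25, 0.20) -> 0.25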
To compare bias against a goal:
If decision levels are specified, the bias goal at each decision level is calculated and a hypothesis test is shown to test whether the observed bias is outside the goal bias. If the p-value is statistically significant, the observed bias is outside the goal.
If the Allowable Errors bands option is checked the scatter plot shows the allowable bias (see below). The confidence interval around the fitted linear line should fall within the allowable bias band if the methods are comparable within allowable bias.
Bias can be compared against a systematic error% of a total allowable error goal. The total allowable error can be specified in absolute units of the analyte, as a percentage of analyte concentration, or as a combination of the two, in which case the larger of the absolute value and the percentage of concentration is used.
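Under the same assumptions as the earlier sketch, the bias goal is the systematic error% applied to the total allowable error (TEa); the helper below is hypothetical and not Analyse-it's implementation:

    def bias_goal_from_tea(level, se_percent, tea_absolute=None, tea_percent=None):
        # Hypothetical helper: allowable bias as a systematic error% of total allowable error (TEa).
        # TEa may be absolute, a percentage of concentration, or the larger of the two.
        goals = []
        if tea_absolute is not None:
            goals.append(tea_absolute)
        if tea_percent is not None:
            goals.append(level * tea_percent / 100.0)
        return max(goals) * se_percent / 100.0

    print(bias_goal_from_tea(6.0, se_percent=25.0, tea_absolute=0.5, tea_percent=10.0))  # 25% of 0.6 -> 0.15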
To compare bias against a systematic error% of total allowable error:
If the Allowable Errors bands option is checked the scatter plot shows the allowable bias (see above). The confidence interval around the fitted linear line should fall within the allowable bias band if the methods are comparable within allowable bias.
Bias can be compared against a manufacturer's performance claim to demonstrate a method is operating correctly.
To compare bias against a manufacturer's claim:
The bias and claimed bias at each decision level are shown, with a confidence interval and a hypothesis test of whether the observed bias differs from the claimed bias. If the p-value is statistically significant, the observed bias is significantly different from the claimed bias.
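One plausible form of such a comparison (not necessarily the exact test Analyse-it performs) is a t-test of the observed bias against the claimed bias, using the standard error of the observed bias; all values below are hypothetical.

    from scipy import stats

    # Hypothetical values: observed bias and its standard error at a decision level,
    # the claimed bias, and the degrees of freedom (n - 2) from the regression
    bias, se_bias, claimed, df = 0.12, 0.05, 0.05, 38

    # Two-sided t-test of observed bias against the manufacturer's claim
    t = (bias - claimed) / se_bias
    p = 2 * stats.t.sf(abs(t), df=df)
    print(f"t = {t:.2f}, p = {p:.3f}")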