Difference plots (also known as bias plots) visually compare two methods, a test method against a reference/comparative method, for analytical accuracy. Almost any kind of difference plot can be produced, including plots recommended by Bland-Altman, Hyltoft Petersen, and the CLSI, to assess agreement and repeatability.
The requirements of the test are:
Data in existing Excel worksheets can be used and should be arranged in the List dataset layout. The dataset must contain at least two continuous scale variables containing the observations for each method. If replicates are observed then a List dataset with repeat/replicate measures layout should be used to arrange the replicates for each method.
When entering new data we recommend using New Dataset to create a new method comparison dataset.
To start the test:
Excel 97, 2000, 2002 & 2003: Select any cell in the range containing the dataset to analyse, then click Analyse on the Analyse-it toolbar, click Method comparison then click Difference plots.
The report shows the number of cases analysed and, if applicable, how many cases were excluded due to missing values. For each method it shows the name, the number of replicates, and, when measured in duplicate, the repeatability, expressed as an SD or a CV depending on the Plot option (see below). The range of observations (minimum and maximum) for the reference/comparative method is also shown.
The scatter plot (see below) shows the observations of the reference/comparative method (X) plotted against the test method (Y). The Use replicates option determines how replicates for each method, if available, are plotted.
The difference/bias plot can be configured to produce plots recommended by Bland-Altman, Hyltoft Petersen, or CLSI. A histogram of the differences can also be shown to see the distribution of the differences.
Either absolute differences or the differences as a percentage of the analyte concentration can be plotted. The Pearson r correlation statistic on the report shows the correlation between the differences and the average values. A high degree of correlation suggests the bias is not constant across the range, and a percent difference plot may be more suitable.
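As a minimal sketch of the values behind such a plot (the data and variable names here are invented for illustration, not Analyse-it output), the differences, percent differences, and the Pearson r between differences and averages could be computed as:

```python
import numpy as np

ref = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # reference/comparative method (X)
test = np.array([1.1, 2.3, 4.2, 9.0, 17.5])  # test method (Y)

mean = (ref + test) / 2          # average of the two methods (plot X axis)
diff = test - ref                # absolute differences (plot Y axis)
pct_diff = 100 * diff / mean     # differences as % of analyte concentration

# Pearson r between differences and averages; a high |r| suggests the bias
# grows with concentration, so a percent difference plot may be preferable.
r = np.corrcoef(mean, diff)[0, 1]
```

With this sample data the differences clearly grow with concentration, so r is close to 1 and a percent difference plot would be the better choice.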
To configure the difference plot:
The difference plot (see below) is shown beneath the scatter plot.
Beneath the difference plot is the histogram of differences (see below), with a normal curve overlay to assist in judging whether the differences are normally distributed.
Bias can be calculated and plotted on the difference plot, to show differences between the methods.
To show and calculate bias:
The difference plot (see below) shows the bias and, if enabled, a confidence interval showing the range likely to contain the true bias. Bias is also shown on the report above the scatter plot.
Bias, which ideally should be zero, is shown with a confidence interval showing the range likely to contain the true bias. The bias is expressed as an SD or a CV, depending on the Plot option (see above).
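A minimal sketch of how bias and its confidence interval could be computed from paired observations, assuming the usual normal-theory formulas (the data and names are illustrative, not the software's internals):

```python
import numpy as np
from scipy import stats

ref = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
test = np.array([1.1, 2.3, 4.2, 9.0, 17.5])

diff = test - ref
n = len(diff)
bias = diff.mean()                       # ideally zero
se = diff.std(ddof=1) / np.sqrt(n)       # standard error of the mean difference
t = stats.t.ppf(0.975, df=n - 1)         # two-sided 95% critical value
ci = (bias - t * se, bias + t * se)      # range likely to contain the true bias
```

If the confidence interval excludes zero, the data suggest a genuine systematic difference between the methods.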
Limits of agreement can be calculated and plotted on the difference plot to show the likely range of differences between the methods.
To show and calculate limits of agreement:
The difference plot (see below) shows bias, limits of agreement, and if enabled, confidence intervals showing the range likely to contain the true bias and limits. Large limits of agreement imply poor precision in one or both methods.
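The classic Bland-Altman limits of agreement are the bias plus or minus 1.96 standard deviations of the differences; the following sketch (invented data, normal-theory assumptions) shows the idea:

```python
import numpy as np

ref = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
test = np.array([1.1, 2.3, 4.2, 9.0, 17.5])

diff = test - ref
bias = diff.mean()
sd = diff.std(ddof=1)

# ~95% of the differences between the methods are expected to fall here;
# wide limits imply poor precision in one or both methods.
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
```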
Repeatability plots can be shown to assess the imprecision of the methods when observed in replicate.
To show repeatability plots:
Repeatability plots (see below) are shown for the methods with replicate measurements, showing the SD/CV (depending on the Plot option) (Y axis) against the mean of the replicates (X axis).
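For duplicate measurements, a common estimator of the repeatability SD is the square root of the mean of the squared duplicate differences divided by two. A hedged sketch, with a made-up replicate layout rather than the add-in's actual dataset structure:

```python
import numpy as np

# rows = cases, columns = replicate 1 and replicate 2 for one method
reps = np.array([[1.0, 1.2],
                 [2.0, 2.1],
                 [4.1, 3.9],
                 [8.2, 8.0]])

means = reps.mean(axis=1)                     # X axis of the repeatability plot
d = reps[:, 0] - reps[:, 1]
sd = np.sqrt((d ** 2).sum() / (2 * len(d)))   # repeatability SD for duplicates
cv = 100 * sd / means.mean()                  # repeatability CV (%)
```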
Bias and imprecision can be compared against performance goals. The allowable bias and imprecision can be specified in absolute units of the analyte, as a percentage of the analyte concentration, or as a combination of the two, in which case the larger of the absolute and percentage values is used.
To compare bias against a goal:
Total allowable error is calculated from the bias and imprecision goals. This is the same as Bias + imprecision, except the difference plot also shows the total allowable error.
The bias and imprecision goals (if specified) are shown with a hypothesis test to determine whether the observed bias/imprecision is within the goal. If the bias/imprecision p-value is statistically significant, the observed bias/imprecision is significantly worse than the goal.
If the Allowable Errors bands option is checked, the difference plot shows the allowable bias (see below). The confidence interval around the fitted line should fall within the allowable bias band if the methods are comparable within allowable bias.
Bias can be compared against a systematic error%, and imprecision against a random error%, of a total allowable error goal. The total allowable error can be specified in absolute units of the analyte, as a percentage of analyte concentration, or as a combination of the two in which case the larger of the absolute and percentage concentration is used.
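The combination rule described above (take the larger of the absolute and percentage limits at each concentration) can be sketched as a small helper; the goal values here are invented for illustration:

```python
def allowable_error(conc, abs_goal=0.25, pct_goal=10.0):
    """Allowable error at a given analyte concentration: the larger of the
    absolute goal and the percentage-of-concentration goal applies."""
    return max(abs_goal, pct_goal / 100.0 * conc)
```

At low concentrations the absolute limit dominates; at high concentrations the percentage limit takes over, which is why the combined form is often used across a wide measuring range.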
To compare bias against a total allowable error goal:
Difference plot shows the total allowable error and the allowable bias range. Observed bias is tested to determine whether it is within the goal.
Difference plot shows the total allowable error and the allowable bias range. Observed bias and imprecision are tested to determine whether they are within the goals.
The bias and imprecision goals (if specified) are shown, and a hypothesis test determines whether the observed bias/imprecision is within the goal. If the bias/imprecision p-value is statistically significant, the observed bias/imprecision is significantly worse than the goal.
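One plausible form of such a test (not necessarily the exact procedure the software uses) is a one-sided t-test of whether the observed bias exceeds the allowable bias; the data and goal below are invented:

```python
import numpy as np
from scipy import stats

ref = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
test = np.array([1.1, 2.3, 4.2, 9.0, 17.5])
allowable_bias = 1.0   # hypothetical goal, in analyte units

diff = test - ref
n = len(diff)
bias = diff.mean()
se = diff.std(ddof=1) / np.sqrt(n)

# H0: |bias| <= allowable bias; a small (significant) p-value means the
# observed bias is significantly worse than the goal.
t_stat = (abs(bias) - allowable_bias) / se
p = stats.t.sf(t_stat, df=n - 1)
```

With this sample data the observed bias is below the goal, so the p-value is large and there is no evidence the bias exceeds the allowable limit.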
If the Allowable Errors bands option is checked, the difference plot shows the allowable bias (see above). The confidence interval around the fitted line should fall within the allowable bias band if the methods are comparable within allowable bias.