This procedure is available in the Analyse-it Method Evaluation edition
Passing-Bablok regression compares two analytical methods, a test method against a reference/comparative method, to determine analytical accuracy.
The requirements of the test are:
Data in existing Excel worksheets can be used, and should be arranged in the List dataset layout. The dataset must contain at least two continuous-scale variables containing the observations for each method. If replicates are observed, use a List dataset with the repeat/replicate measures layout to arrange the replicates for each method.
When entering new data we recommend using New Dataset to create a new method comparison dataset.
To start the test:
Excel 97, 2000, 2002 & 2003:
Select any cell in the range containing the dataset to analyse, then click Analyse on the Analyse-it toolbar, click Method comparison then click Passing-Bablok.
The report shows the number of cases analysed and, if applicable, how many cases were excluded due to missing values.
Constant and proportional bias are shown next. When two methods produce equivalent results, the constant bias will be zero and the proportional bias will be one. Confidence intervals show the range that likely contains the true constant and proportional bias.
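The constant and proportional bias reported here are the intercept and slope of the Passing-Bablok fit. As a rough illustration of how that fit is estimated, the sketch below implements the shifted-median slope estimator; it is a simplified, illustrative version (function and variable names are my own, and tied or vertical slopes are simply skipped), not Analyse-it's implementation.

```python
# Simplified sketch of Passing-Bablok intercept/slope estimation.
# Illustrative only: ties and vertical slopes are skipped for brevity.
from statistics import median

def passing_bablok(x, y):
    n = len(x)
    slopes = []
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[j] - x[i]
            dy = y[j] - y[i]
            if dx == 0:
                continue          # identical x values: slope undefined, skipped here
            s = dy / dx
            if s == -1:
                continue          # slopes of exactly -1 are excluded by the method
            slopes.append(s)
    slopes.sort()
    n_s = len(slopes)
    # The median is shifted by the number of slopes below -1.
    k = sum(1 for s in slopes if s < -1)
    if n_s % 2 == 1:
        b = slopes[(n_s - 1) // 2 + k]
    else:
        b = 0.5 * (slopes[n_s // 2 - 1 + k] + slopes[n_s // 2 + k])
    a = median(yi - b * xi for xi, yi in zip(x, y))
    return a, b   # constant bias (intercept), proportional bias (slope)
```

For two equivalent methods the sketch returns an intercept near zero and a slope near one, matching the interpretation given above.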
The scatter plot (see below) shows the observations of reference/comparative method (X) plotted against the test method (Y). The Use replicates option determines how replicates for each method, if available, are plotted.
Beneath the scatter plot is a residual plot (see below) showing the differences between the test method observations and the fitted line.
Bias can be determined for up to three decision levels.
To determine bias at specific decision levels:
An additional table appears above the scatter plot showing the bias at each decision level, with confidence interval.
Bias can be compared against a bias performance goal. The allowable bias can be specified in absolute units of the analyte, as a percentage of analyte concentration, or as a combination of the two, in which case the larger of the absolute value and the percentage of concentration is used.
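The "larger of the two" rule can be sketched as a small helper. This is a hypothetical illustration of the rule as described above (the function name and parameters are my own), not part of Analyse-it:

```python
# Hypothetical helper: allowable bias at a given concentration, with the
# goal specified in absolute units, as a percentage, or both.
# When both are given, the larger of the two applies.
def allowable_bias(concentration, absolute=None, percent=None):
    candidates = []
    if absolute is not None:
        candidates.append(absolute)
    if percent is not None:
        candidates.append(percent / 100.0 * concentration)
    if not candidates:
        raise ValueError("specify an absolute and/or percentage goal")
    return max(candidates)
```

For example, with an absolute goal of 0.5 units and a percentage goal of 10%, the allowable bias at a concentration of 10 units is max(0.5, 1.0) = 1.0 units; at a concentration of 2 units it is max(0.5, 0.2) = 0.5 units.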
To compare bias against a goal:
If decision levels are specified the bias goal at each decision level is shown for comparison against the observed bias.
If the Allowable Errors bands option is checked the scatter plot shows the allowable bias (see below). The confidence interval around the fitted line should fall within the allowable bias band if the methods are comparable within allowable bias.
Bias can be compared against a systematic error% of a total allowable error goal. The total allowable error can be specified in absolute units of the analyte, as a percentage of analyte concentration, or as a combination of the two, in which case the larger of the absolute value and the percentage of concentration is used.
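Under the common reading of this goal, the bias goal is the stated systematic-error percentage of the total allowable error (TEa), with TEa itself resolved by the larger-of-the-two rule described above. The sketch below is an assumption-laden illustration (names and the exact formula are my own interpretation, not Analyse-it's):

```python
# Hypothetical sketch: bias goal as a systematic-error share of total
# allowable error (TEa). TEa may be absolute, a percentage of
# concentration, or both (the larger applies).
def bias_goal_from_tea(concentration, se_percent, tea_absolute=None, tea_percent=None):
    candidates = []
    if tea_absolute is not None:
        candidates.append(tea_absolute)
    if tea_percent is not None:
        candidates.append(tea_percent / 100.0 * concentration)
    if not candidates:
        raise ValueError("specify an absolute and/or percentage TEa")
    tea = max(candidates)
    return se_percent / 100.0 * tea   # assumed: bias goal = SE% x TEa
```

For example, with TEa of 1.0 unit (absolute) or 5% at a concentration of 10 units, TEa resolves to max(1.0, 0.5) = 1.0 unit, and a 50% systematic-error share gives a bias goal of 0.5 units.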
To compare bias against a systematic error% of total allowable error:
If the Allowable Errors bands option is checked the scatter plot shows the allowable bias (see above). The confidence interval around the fitted line should fall within the allowable bias band if the methods are comparable within allowable bias.
A CUSUM plot and CUSUM linearity test can be shown to help judge the linearity of the method.
To assess linearity of the method:
A CUSUM linearity test determines if the residuals are randomly distributed around the fitted line. A significant p-value indicates the method is non-linear.
The linearity plot (see below) visually shows the running total of the number of observations above (counted as +1) and below (counted as -1) the fitted line. Ideally there should be roughly equal numbers of observations above and below the zero line, with the running total staying close to zero. If clusters of observations form on either side of the zero line the method may be non-linear.
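The running total described above can be sketched in a few lines. This is an illustrative sketch of the CUSUM path only (function name is my own; it does not perform the significance test):

```python
# Simplified CUSUM sketch: running total of the signs of the residuals
# around the fitted line (+1 above, -1 below; exact zeros leave the
# total unchanged).
def cusum(residuals):
    total = 0
    path = []
    for r in residuals:
        if r > 0:
            total += 1
        elif r < 0:
            total -= 1
        path.append(total)
    return path
```

Residuals that alternate in sign keep the path hovering around zero, as expected for a linear method; long runs of same-sign residuals drive the path away from zero, the pattern that flags non-linearity.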