# Average bias

Bias is a measure of systematic measurement error: the component of measurement error that remains constant in replicate measurements on the same item. When a method is measured against a reference method over many items, the average bias is an estimate of the bias averaged over all the items.

Strictly, bias is the term used when a method is compared against a reference method. When the comparison is instead against another routine comparative laboratory method, the estimate is simply an average difference between methods rather than an average bias. For clarity of writing, we use the term average bias throughout.

The average bias is usually expressed as the constant and proportional bias from a regression procedure, or as a constant or proportional bias from the mean of the differences or relative differences. If other sources of systematic error are present, such as nonlinearity or interferences, the average bias estimate will be incorrect.
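As a rough sketch of the two simple summaries above, the constant bias can be estimated as the mean of the differences and the proportional bias as the mean of the relative differences. The paired values below are purely illustrative, not data from any real study:

```python
import statistics

# Hypothetical paired results (reference, test) for the same items;
# illustrative numbers only.
pairs = [(1.0, 1.1), (2.0, 2.1), (3.0, 3.2), (4.0, 4.1), (5.0, 5.3)]

ref = [r for r, _ in pairs]
test = [t for _, t in pairs]

# Constant bias: mean of the differences (test - reference).
constant_bias = statistics.mean(t - r for t, r in zip(test, ref))

# Proportional bias: mean of the relative differences, as a percentage.
proportional_bias = 100 * statistics.mean((t - r) / r for t, r in zip(test, ref))

print(f"constant bias = {constant_bias:.3f}")
print(f"proportional bias = {proportional_bias:.1f}%")
```

A regression procedure (e.g. Deming or Passing-Bablok, covered elsewhere in this guide) estimates both components simultaneously and is generally preferred when both are present.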

The average bias is an estimate of the true unknown average bias from a single study. If the study were repeated, the estimate would be expected to vary from study to study. Therefore, if a single estimate is compared directly to 0 or to the allowable bias, the statement applies only to that single study. To make inferences about the true unknown bias, you must perform a hypothesis test:

- Equality test
The null hypothesis states that the bias is equal to 0, against the alternative hypothesis that it is not equal to zero. When the test p-value is small, you can reject the null hypothesis and conclude that the bias is different from zero.

It is important to remember that a statistically significant p-value tells you nothing about the practical importance of what was observed. For a large sample, the bias for a statistically significant hypothesis test may be so small as to be practically unimportant. Conversely, although there may be some evidence of bias, the sample size may be too small for the test to reach statistical significance, and you may miss an opportunity to discover a true, meaningful bias. Lack of evidence against the null hypothesis does not mean it has been proven true; the belief before you perform the study is that the null hypothesis is true, and the purpose of the study is to look for evidence against it. An equality test at the 5% significance level is equivalent to testing whether the 95% confidence interval includes zero.
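The equality test described above can be sketched as a one-sample t-test on the paired differences against zero, using the equivalence between the 5% two-sided test and the 95% confidence interval. The differences and the df = 9 critical value are illustrative assumptions, not values from the guide:

```python
import math
import statistics

# Hypothetical paired differences (test method minus reference method);
# illustrative numbers only.
diffs = [0.8, -0.3, 1.2, 0.5, -0.1, 0.9, 0.4, 1.1, 0.2, 0.6]

n = len(diffs)
mean = statistics.mean(diffs)                # average bias estimate
se = statistics.stdev(diffs) / math.sqrt(n)  # standard error of the mean

# Two-sided 95% CI; 2.262 is the t critical value for df = n - 1 = 9.
t_crit = 2.262
ci = (mean - t_crit * se, mean + t_crit * se)

t_stat = mean / se  # test statistic for H0: bias = 0

# Rejecting H0 at the 5% level is equivalent to the 95% CI excluding zero.
reject = abs(t_stat) > t_crit
print(f"bias={mean:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f}), reject H0: {reject}")
```

Here the 95% confidence interval excludes zero, so the null hypothesis of zero bias is rejected; whether a bias of this size matters in practice is a separate, clinical judgment.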

- Equivalence test
The null hypothesis states that the bias is outside an interval of practical equivalence, against the alternative hypothesis that the bias is within the interval considered practically equivalent. When the test p-value is small, you can reject the null hypothesis and conclude that the bias lies within the specified interval, and therefore that the methods are practically equivalent.

An equivalence test is used to demonstrate that a bias requirement can be met: the null hypothesis states that the methods are not equivalent, and the test looks for evidence that they are in fact equivalent. An equivalence hypothesis test at the 5% significance level is equivalent to testing whether the 90% confidence interval lies within the allowable bias interval.
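The confidence-interval form of the equivalence test above can be sketched as follows: claim equivalence when the 90% confidence interval for the average bias lies entirely within the allowable bias interval. The differences, the ±0.5 allowable bias, and the df = 9 critical value are illustrative assumptions:

```python
import math
import statistics

# Hypothetical paired differences and an assumed allowable bias of +/-0.5 units.
diffs = [0.12, -0.05, 0.20, 0.08, -0.10, 0.15, 0.02, 0.11, -0.03, 0.09]
allowable = 0.5

n = len(diffs)
mean = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(n)

# 90% CI uses the one-sided 5% t critical value; 1.833 for df = n - 1 = 9.
t_crit = 1.833
lo, hi = mean - t_crit * se, mean + t_crit * se

# Equivalence at the 5% level is claimed when the whole 90% CI
# lies inside the allowable bias interval.
equivalent = (-allowable < lo) and (hi < allowable)
print(f"90% CI=({lo:.3f}, {hi:.3f}), equivalent: {equivalent}")
```

This is the confidence-interval formulation of the two one-sided tests (TOST) procedure; the two views always give the same accept/reject decision.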


Version 6.15

Published 18-Apr-2023