
22-Apr-2020 Diagnostic accuracy (sensitivity/specificity) versus agreement (PPA/NPA) statistics


Due to COVID-19, there is currently a lot of interest in the sensitivity and specificity of diagnostic tests. These terms relate to the accuracy of a test in diagnosing an illness or condition. To calculate these statistics, the true state of the subject, that is, whether the subject actually has the illness or condition, must be known.

In recent guidance for laboratories and manufacturers, “Policy for Diagnostic Tests for Coronavirus Disease-2019 during the Public Health Emergency”, the FDA states that a clinical agreement study should be used to establish performance characteristics (sensitivity/PPA, specificity/NPA). While the terms sensitivity/specificity are widely known and used, the terms PPA/NPA are not.

Agreement statistics

The CLSI EP12 protocol, User Protocol for Evaluation of Qualitative Test Performance, describes the terms positive percent agreement (PPA) and negative percent agreement (NPA). When you have two binary diagnostic tests to compare, you can use an agreement study to calculate these statistics.

  • Positive percent agreement (PPA) is the proportion of comparative/reference method positive results in which the test method result is positive.
  • Negative percent agreement (NPA) is the proportion of comparative/reference method negative results in which the test method result is negative.

In the conventional 2x2 layout, with the test method results in rows and the comparative method results in columns:

                 Comparative +    Comparative −
  Test +               a                b
  Test −               c                d

so that PPA = a / (a + c) and NPA = d / (b + d).

As you can see, these measures are asymmetric. That is, interchanging the test and comparative methods, and therefore the values of b and c, changes the statistics. They do, however, have a natural, simple interpretation when one method is a reference/comparative method and the other a test method.
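The asymmetry is easy to see in a short calculation. The following Python sketch, using entirely hypothetical counts, computes PPA and NPA for a 2x2 agreement table and shows how the statistics change when the test and comparative methods are interchanged:

```python
# Sketch: PPA/NPA from a 2x2 agreement table (hypothetical counts).
# Cell labels follow the conventional layout:
#   a = both methods positive, b = test +/comparative -,
#   c = test -/comparative +, d = both methods negative.

def ppa(a, b, c, d):
    """Positive percent agreement: test positives among comparative positives."""
    return a / (a + c)

def npa(a, b, c, d):
    """Negative percent agreement: test negatives among comparative negatives."""
    return d / (b + d)

a, b, c, d = 90, 5, 10, 95  # hypothetical counts

print(f"PPA = {ppa(a, b, c, d):.1%}")   # 90/100 = 90.0%
print(f"NPA = {npa(a, b, c, d):.1%}")   # 95/100 = 95.0%

# Interchanging the test and comparative methods swaps b and c,
# and the statistics change -- the measures are asymmetric:
print(f"PPA (methods swapped) = {ppa(a, c, b, d):.1%}")  # 90/95  = 94.7%
print(f"NPA (methods swapped) = {npa(a, c, b, d):.1%}")  # 95/105 = 90.5%
```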

What agreement statistics don’t tell us

Although the formulae for positive and negative agreement are identical to those for sensitivity/specificity, it is essential to distinguish them as the interpretation is different.

We have seen product information for a COVID-19 rapid test use the terms ‘relative’ sensitivity and ‘relative’ specificity when comparing against another test. The term ‘relative’ is a misnomer. It implies that you can use these 'relative' measures to calculate the sensitivity/specificity of the new test based on the sensitivity/specificity of the comparative test. That is simply not possible.
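To illustrate why, here is a minimal sketch using entirely hypothetical counts: two scenarios that produce an identical PPA against the comparative method, yet different true sensitivities for the new test. A 'relative' measure therefore cannot recover the new test's sensitivity:

```python
# Hypothetical illustration: identical PPA, different true sensitivity.
# Assume 200 truly diseased subjects; the comparative test detects 180
# of them (sensitivity 90%). All counts below are invented for the sketch.

def sensitivity(tp, fn):
    """True sensitivity: true positives among all truly diseased."""
    return tp / (tp + fn)

diseased = 200
comp_pos = 180  # comparative method positives among the diseased

# Scenario A: the new test is positive on 170 of the 180 comparative
# positives, and negative on all 20 subjects the comparative test missed.
ppa_a = 170 / comp_pos
sens_a = sensitivity(170, 30)   # 170 of 200 diseased detected

# Scenario B: same agreement with the comparative positives, but the new
# test also catches 10 of the comparative method's 20 misses.
ppa_b = 170 / comp_pos
sens_b = sensitivity(180, 20)   # 180 of 200 diseased detected

print(f"Scenario A: PPA {ppa_a:.1%}, true sensitivity {sens_a:.1%}")
print(f"Scenario B: PPA {ppa_b:.1%}, true sensitivity {sens_b:.1%}")
# Same PPA (94.4%) in both scenarios; true sensitivity differs (85% vs 90%).
```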

It is also not possible to determine from these statistics that one test is better than another. Recently, a national UK newspaper ran an article about a PCR test developed by Public Health England, reporting that it disagreed with a new commercial test on 35 out of 1,144 samples (3%). Of course, to many journalists, this was evidence that the PHE test was inaccurate. But there is no way to know which test is correct and which is incorrect in any of those 35 disagreements: in an agreement study, we simply do not know the true state of the subject. Only by further investigation of the disagreements would it be possible to identify the reason for the discrepancies.
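What the newspaper's numbers do support is an overall percent agreement (OPA), which quantifies how often the two tests agree without saying anything about which is correct. A sketch, using a Wilson score interval for the agreement proportion (standard formula, plain Python):

```python
import math

# Overall percent agreement for the example above:
# two tests disagreed on 35 of 1,144 samples.
n = 1144
disagreements = 35
opa = (n - disagreements) / n   # proportion of samples where the tests agree

# Wilson score 95% confidence interval for the agreement proportion.
z = 1.96
centre = opa + z**2 / (2 * n)
margin = z * math.sqrt(opa * (1 - opa) / n + z**2 / (4 * n**2))
denom = 1 + z**2 / n
lo, hi = (centre - margin) / denom, (centre + margin) / denom

# Note: OPA measures agreement between the two tests only; it cannot
# tell us which test was right in any of the 35 disagreements.
print(f"OPA = {opa:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```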

To avoid confusion, we recommend you always use the terms positive agreement (PPA) and negative agreement (NPA) when describing the agreement of such tests.

In the next blog post, we show you how to use Analyse-it to perform the agreement test with a worked example.

For more information, see our online documentation:
Agreement measures for binary and semi-quantitative data
Agreement plot

Previous post
COVID-19: Establishing the diagnostic accuracy (sensitivity/specificity) of a test using Analyse-it
Next post
COVID-19: Calculating PPA/NPA agreement measures using Analyse-it
