A measurement procedure can be precise, linear, and well-characterised analytically — and still be clinically useless if it can’t reliably distinguish positive from negative cases. Diagnostic performance evaluation answers the question that matters most: how well does the test actually discriminate? Set the threshold too low and you flood clinicians with false positives. Set it too high and you miss disease. The trade-off between sensitivity and specificity depends on the clinical context, and the optimal threshold depends on who you’re testing and what the consequences of misdiagnosis are.
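The threshold trade-off described above can be sketched in a few lines of code. This is a minimal illustration, not Analyse-it's algorithm: the data values are hypothetical, and Youden's J (sensitivity + specificity − 1) is used here as one common threshold criterion among several.

```python
# Illustrative only: threshold selection for a quantitative test via
# Youden's J (sensitivity + specificity - 1). Values are hypothetical,
# not taken from the CLSI example datasets.

# Hypothetical measurements: (value, has_disease)
data = [
    (1.2, False), (1.8, False), (2.1, False), (2.4, False), (2.9, False),
    (2.6, True),  (3.1, True),  (3.5, True),  (4.0, True),  (4.4, True),
]

def sens_spec(threshold):
    """Sensitivity and specificity when values >= threshold are called positive."""
    tp = sum(1 for v, d in data if d and v >= threshold)
    fn = sum(1 for v, d in data if d and v < threshold)
    tn = sum(1 for v, d in data if not d and v < threshold)
    fp = sum(1 for v, d in data if not d and v >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Sweep every observed value as a candidate threshold and pick the one
# maximising Youden's J.
best = max((v for v, _ in data), key=lambda t: sum(sens_spec(t)) - 1)
sens, spec = sens_spec(best)
print(best, sens, spec)  # prints 2.6 1.0 0.8
```

Lowering the threshold below 2.6 trades specificity for no gain in sensitivity; raising it misses diseased cases — exactly the trade-off a clinical context must resolve.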
Analyse-it covers both quantitative tests (EP24-A2 ROC analysis with DeLong AUC comparison) and qualitative tests (EP12-A2). Compare up to 10 tests simultaneously, determine optimal thresholds, and produce the diagnostic accuracy evidence for product labelling, regulatory submissions, or publication.
See diagnostic performance results in detail — ROC curves, AUC comparison, threshold determination, and qualitative test evaluation — using CLSI example datasets you can download and follow along with.
EP24-A2 — Appendix D

Diagnostic performance is one part of the Method Validation Edition, alongside measurement system analysis, method comparison, and reference intervals.