Qualitative (Sensitivity/Specificity) examines the performance of a diagnostic test and its ability to correctly identify non-normal (abnormal/diseased) cases. Specific decision levels (medical decision points) can be evaluated if the diagnostic test is measured on a continuous scale.
The requirements of the test are:
Diagnostic test performance can be evaluated when the true state and the diagnosis made by the diagnostic test are known for each case.
Data in existing Excel worksheets can be used and should be arranged in the List dataset layout, or alternatively the number of cases correctly & incorrectly identified can be summarised as a 2x2 table dataset containing counts. Any names can be used for the positive and negative states, e.g. Abnormal/Normal, as long as the true state and test diagnosis use the same state names.
When entering new data we recommend using New Dataset to create a new test performance or 2x2 contingency table dataset.
To start the test:
Excel 97, 2000, 2002 & 2003: Select any cell in the range containing the dataset to analyse, then click Analyse on the Analyse-it toolbar, click Test performance then click Qualitative (Sensitivity / Specificity).
The report shows the number of observations analysed, and, if applicable, how many missing values were excluded. The table shows the number of actual positive and negative cases against the number identified by the diagnostic test.
Sample prevalence shows the proportion of positive cases (as indicated by true classification) in the sample and is used to calculate predictive values and correct- and mis-classification rates. Predictive values use the population prevalence, if entered.
Sensitivity (the TP, true positive, proportion), specificity (the TN, true negative, proportion), the FP (false positive) and FN (false negative) proportions, and their confidence intervals, together with the positive and negative likelihood ratios, describe the performance of the test.
When the diagnostic test is observed on a continuous scale individual decision levels (medical decision points) can be evaluated. Potential decision levels can be determined from a ROC analysis.
Data in existing Excel worksheets can be used and should be arranged in a List dataset layout containing a nominal variable indicating the true state, and a continuous variable containing the observations of the diagnostic test for each case.
When entering new data we recommend using New Dataset to create a new test performance dataset.
To start:
The report shows the number of observations analysed, and, if applicable, how many missing values were listwise excluded.
A table shows the number of true normal and non-normal cases against those identified by the diagnostic test at the chosen decision level. Sensitivity, specificity, positive & negative predictive values and efficiency show the performance of the diagnostic test.
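Evaluating a continuous test at a decision level amounts to dichotomising the observations at the cut-off and tabulating the results against the true states. A minimal sketch, assuming values at or above the cut-off are classed as positive (the function, state names and example data are illustrative, not Analyse-it's):

```python
# Evaluate a continuous diagnostic test at a chosen decision level
# (illustrative sketch). Values >= cutoff are classed as test-positive.

def evaluate_at_cutoff(values, true_states, cutoff, positive="Abnormal"):
    tp = fn = fp = tn = 0
    for v, state in zip(values, true_states):
        test_positive = v >= cutoff
        truly_positive = (state == positive)
        if test_positive and truly_positive:
            tp += 1                      # correctly flagged abnormal
        elif test_positive:
            fp += 1                      # normal flagged as abnormal
        elif truly_positive:
            fn += 1                      # abnormal missed
        else:
            tn += 1                      # correctly passed as normal
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    efficiency = (tp + tn) / len(values)  # proportion correctly classified
    return sensitivity, specificity, efficiency

# Hypothetical observations and true states, evaluated at cut-off 5.0.
values = [3.1, 5.5, 6.2, 4.8, 5.9, 2.4]
states = ["Normal", "Normal", "Abnormal", "Abnormal", "Abnormal", "Normal"]
sens, spec, eff = evaluate_at_cutoff(values, states, cutoff=5.0)
```

Repeating this at several candidate cut-offs is, in effect, what a ROC analysis does when suggesting potential decision levels.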
When evaluating a test the subjects studied are usually hand-selected to ensure the sample contains enough normal and abnormal cases to properly evaluate the performance of the test. However, in the population at large the incidence or prevalence of the non-normal state will probably be very different from that in the sample. In most populations the condition will be rarer than in the hand-selected sample, causing a small false-positive rate in the sample to be magnified when applied to the population (more so than a small false-negative rate).
Positive & negative predictive values, the probabilities that a positive or negative test result correctly diagnoses the state of a case, use the prevalence (when specified) to better express the performance of the test in the population.
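The adjustment uses Bayes' theorem: predictive values combine the test's sensitivity and specificity with the prevalence. A sketch, with an assumed prevalence of 1% to show how a small false-positive rate is magnified in a rarer population (the function name and figures are illustrative):

```python
# Predictive values at a given prevalence via Bayes' theorem (illustrative).

def predictive_values(sensitivity, specificity, prevalence):
    """PPV: probability a positive result is truly abnormal.
    NPV: probability a negative result is truly normal."""
    true_pos = sensitivity * prevalence                # abnormal, test positive
    false_pos = (1 - specificity) * (1 - prevalence)   # normal, test positive
    true_neg = specificity * (1 - prevalence)          # normal, test negative
    false_neg = (1 - sensitivity) * prevalence         # abnormal, test negative
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# A test with 90% sensitivity and 95% specificity, applied to a
# population where only 1% of cases are abnormal.
ppv, npv = predictive_values(sensitivity=0.9, specificity=0.95, prevalence=0.01)
```

At 1% prevalence the positive predictive value falls to roughly 15%, even though the test is correct for 95% of normal cases in the sample, which is exactly the magnification of the false-positive rate described above.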
To enter the prevalence of the condition in the population:
The positive and negative predictive values will now show the ability of the diagnostic test to correctly identify positive (non-normal) and negative (normal) cases drawn at random from the population.