ROC, or Receiver Operating Characteristic, is used to examine the performance of a diagnostic test over a range of decision levels (medical decision points). Performance is the test's ability to correctly identify positive and negative cases. Individual decision levels can be evaluated using the Qualitative (Sensitivity / Specificity) test.
The requirements of the test are:
Data in existing Excel worksheets can be used and should be arranged in the List dataset layout. The dataset must contain a nominal scale variable indicating the true state of the case, positive or negative, and a continuous scale variable containing the observations of the diagnostic test.
When entering new data we recommend using New Dataset to create a new Test performance dataset.
To start the test:
Excel 97, 2000, 2002 & 2003:
Select any cell in the range containing the dataset to analyse, then click Analyse on the Analyse-it toolbar, click Test performance then click ROC curve.
The report shows the number of observations analysed, how many missing values were listwise excluded, and the number of positive and negative cases.
The area under the curve (AUC) is a measure of the ability of the diagnostic test to correctly identify cases. Diagnostic tests with higher AUCs are generally better, and the AUC should always be above 0.5, indicating the test diagnoses better than chance. A hypothesis test is used to statistically test whether the diagnostic test is better than chance at correctly diagnosing the true state. A significant p-value indicates the diagnostic test diagnoses better than chance.
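For intuition, the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A minimal sketch of that equivalence (illustrative only; the function name and data are ours, not Analyse-it's):

```python
# Illustrative sketch: the AUC is the probability that a randomly chosen
# positive case scores higher than a randomly chosen negative case
# (the Mann-Whitney statistic; ties count as half).
def auc(positives, negatives):
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

# Hypothetical test observations for truly positive and truly negative cases:
pos = [3.2, 4.1, 5.0, 6.3]
neg = [1.0, 2.2, 3.2, 2.8]
print(auc(pos, neg))  # 0.96875, well above the chance value of 0.5
```

A test whose positive and negative scores overlap completely would score 0.5, matching the chance diagonal on the ROC plot.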
The performance of the test at each decision level is evaluated. The tables show the TP rate (true positive rate / sensitivity) and TN rate (true negative rate / specificity) with confidence intervals, and the positive and negative likelihood ratios.
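As a sketch of how those per-level statistics relate (our own function and naming, assuming a case tests positive when its observation is at or above the decision level):

```python
# Illustrative only: sensitivity, specificity and likelihood ratios at one
# decision level. Assumes a case tests positive when its observation >= level,
# and that 0 < specificity < 1 so the ratios are defined.
def performance(scores, truth, level):
    tp = sum(1 for s, t in zip(scores, truth) if t and s >= level)      # true positives
    fn = sum(1 for s, t in zip(scores, truth) if t and s < level)       # false negatives
    tn = sum(1 for s, t in zip(scores, truth) if not t and s < level)   # true negatives
    fp = sum(1 for s, t in zip(scores, truth) if not t and s >= level)  # false positives
    sens = tp / (tp + fn)       # TP rate (sensitivity)
    spec = tn / (tn + fp)       # TN rate (specificity)
    lr_pos = sens / (1 - spec)  # positive likelihood ratio
    lr_neg = (1 - sens) / spec  # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

scores = [1, 2, 3, 4, 3, 4, 5, 6]       # hypothetical observations
truth = [False] * 4 + [True] * 4        # true state of each case
print(performance(scores, truth, 3.5))  # (0.75, 0.75, 3.0, 0.333...)
```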
The ROC plot (see below) shows the false positive rate (1-specificity) (X axis), the probability of incorrectly diagnosing a case as positive when its true state is negative, against the true positive rate (sensitivity) (Y axis), the probability of correctly diagnosing a positive case, across all decision levels for the diagnostic test. Ideally the curve climbs quickly toward the top-left, meaning the test correctly identifies cases. The diagonal grey line is a guideline for a test that is unable to correctly identify cases.
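The curve itself can be thought of as one (1-specificity, sensitivity) point per candidate decision level. A rough sketch under the same positive-when-at-or-above-level assumption (function name is ours):

```python
# Illustrative only: each candidate decision level contributes one
# (false positive rate, true positive rate) point to the ROC curve.
def roc_points(scores, truth):
    pos = sum(truth)        # number of truly positive cases
    neg = len(truth) - pos  # number of truly negative cases
    points = []
    for level in sorted(set(scores)):
        tp = sum(1 for s, t in zip(scores, truth) if t and s >= level)
        fp = sum(1 for s, t in zip(scores, truth) if not t and s >= level)
        points.append((fp / neg, tp / pos))  # (1 - specificity, sensitivity)
    return points

print(roc_points([1, 2, 3, 4], [False, False, True, True]))
# [(1.0, 1.0), (0.5, 1.0), (0.0, 1.0), (0.0, 0.5)]
```

Raising the decision level moves along the curve from the top-right (everyone diagnosed positive) toward the bottom-left (no one diagnosed positive).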
Additional secondary axes can be shown on the ROC plot to show the true negative rate (specificity) (secondary X axis), the probability of correctly identifying a negative case, and the false negative rate (1-sensitivity) (secondary Y axis), the probability of incorrectly diagnosing a case as negative when its true state is positive.
To change the axes shown on the ROC plot:
When evaluating the results of a diagnostic test we are usually interested in the probability of the classification being correct given the test result.
Predictive values are affected by the prevalence of the condition in the population. When evaluating test performance the cases studied are usually hand-picked to ensure the sample contains enough normal and abnormal cases to fully evaluate the test's performance. The prevalence in the sample is therefore not representative of the population, and the predictive values would be too optimistic. To calculate the correct predictive values you should use the pre-test probability (prevalence) of a case being truly positive before the test is performed.
To show positive & negative predictive values:
The positive and negative predictive values are now shown in the table showing performance at each decision level. The positive predictive value (PV+) is the probability of the case being truly positive given a positive test result; the negative predictive value (PV-) is the probability of the case being truly negative given a negative test result.
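The prevalence adjustment follows from Bayes' theorem. A minimal sketch with illustrative numbers (function and parameter names are ours):

```python
# Illustrative only: predictive values from sensitivity, specificity and the
# pre-test probability (prevalence), via Bayes' theorem.
def predictive_values(sens, spec, prevalence):
    ppv = (sens * prevalence) / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = (spec * (1 - prevalence)) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# A test with 90% sensitivity and 90% specificity looks strong, yet applied
# where only 1% of cases are truly positive, a positive result is still wrong
# more often than not:
ppv, npv = predictive_values(0.9, 0.9, 0.01)
print(round(ppv, 3), round(npv, 3))  # 0.083 0.999
```

This is why predictive values computed from a hand-picked study sample, rather than from the population prevalence, would be too optimistic.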
The cost of a diagnosis, whether in terms of financial cost or in terms of the cost to the health of the subject, can be taken into account when evaluating decision levels. For some diagnoses the cost of treating a subject may be financially high, or the impact of a treatment might be such that treating false-positive subjects should be minimised, even at the risk of missing some positive cases. In other situations the financial or health cost of treatment might be minimal, meaning a higher false-positive rate can be tolerated to catch more positive cases.
To specify costs of diagnosis:
The average cost of a diagnosis at each decision level is now shown in the table. The lowest cost indicates the best decision level, minimising the costs. If only the FP / FN costs were specified, the costs given are the cost of misdiagnoses.
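The average can be sketched as a cost-weighted sum of the four outcome counts (our own parameter names; this is an assumed formula for illustration, not necessarily Analyse-it's exact calculation):

```python
# Illustrative only: average cost per diagnosis at one decision level, assuming
# a per-outcome cost for each of the TP, FP, TN and FN results.
def average_cost(tp, fp, tn, fn, cost_tp=0.0, cost_fp=0.0, cost_tn=0.0, cost_fn=0.0):
    n = tp + fp + tn + fn  # total number of diagnoses at this decision level
    return (tp * cost_tp + fp * cost_fp + tn * cost_tn + fn * cost_fn) / n

# With only FP / FN costs specified, the result is the average cost of mistakes:
print(average_cost(40, 5, 45, 10, cost_fp=100, cost_fn=500))  # 55.0
```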
A decision plot can be shown to help in choosing optimum decision levels in terms of sensitivity (true positive rate) and specificity (true negative rate).
To show the sensitivity / specificity decision plot:
The decision plot (see below) shows the sensitivity (true positive rate) and specificity (true negative rate) (Y axis) over all decision levels (X axis). The point where the lines cross is the optimum decision level, where the maximum number of cases are correctly diagnosed as positive/negative. From the plot you may choose a decision level that correctly identifies a given proportion of the true positive or true negative cases.
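Reading the crossing point off the plot amounts to finding the decision level where sensitivity and specificity are closest; a sketch with hypothetical values (function name is ours):

```python
# Illustrative only: the crossing point is approximately the decision level
# where |sensitivity - specificity| is smallest.
def crossing_level(levels, sens, spec):
    return min(zip(levels, sens, spec), key=lambda t: abs(t[1] - t[2]))[0]

levels = [1.0, 2.0, 3.0]  # hypothetical decision levels
sens = [1.0, 0.80, 0.40]  # sensitivity at each level
spec = [0.30, 0.75, 0.90]  # specificity at each level
print(crossing_level(levels, sens, spec))  # 2.0
```

If missing positives matters more than false alarms (or vice versa), you would instead pick the level meeting a target sensitivity or specificity, as the text above describes.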
A decision plot can be shown to help in choosing optimum decision levels in terms of likelihood ratios.
To show the likelihood ratio decision plot:
The decision plot (see below) shows the positive likelihood ratio (Y axis 1) and the negative likelihood ratio (Y axis 2) over all decision levels (X axis). Higher positive likelihood ratios and lower negative likelihood ratios indicate decision levels where the test result more strongly revises the probability of the true state.
A decision plot can be shown to help in choosing optimum decision levels in terms of predictive values.
To show the predictive value decision plot:
The decision plot (see below) shows the positive and negative predictive values (Y axes) over all decision levels (X axis). The point where the lines cross is the optimum decision level, where the maximum number of cases are correctly diagnosed as positive/negative.
A decision plot can be shown to help in choosing optimum decision levels in terms of costs. Costs can be expressed in financial terms or in terms of the cost to the health of the subject (see Costs above).
To show the costs decision plot:
The decision plot (see below) shows the cost (Y axis) over all decision levels (X axis). If costs were expressed in financial terms, the plot shows the average financial cost of each diagnosis, including the cost of treating positive cases (if Cost TP and Cost FP include the treatment cost). If relative costs are specified, for example indicating the relative damage to the health of the subject, the lowest cost indicates the best decision level.