Tested against the industry-recognised NIST Statistical Reference Datasets (StRD), Analyse-it consistently performed amongst the best, outperforming some of the more popular statistical packages.
Download Evaluating the Numerical Accuracy of Analyse-it (.pdf)
Download Analyse-it NIST StRD Validation workbooks (.zip)
Our development & validation process
In response to industry concerns about the numerical accuracy of statistical software, the Statistical Engineering and Mathematical and Computational Sciences Divisions of NIST’s Information Technology Laboratory developed datasets with certified values for a variety of statistical methods.
For more information about the datasets see:
The results obtained from statistical software packages can be compared against the certified values. The certified values are accurate to 15 significant digits and were computed using ultra-high precision floating point arithmetic.
Most statistical packages use IEEE 754 double-precision (64-bit) floating point arithmetic and, due to the finite precision, round-off, and truncation errors involved in numerical operations, will be unable to obtain the exact certified value. Therefore, a good measure of the accuracy of a result x against the certified value c is the log relative error (LRE), the base-10 logarithm of the absolute value of the relative error:
LRE = -log10 (|x - c| / |c|)
if c ≠ 0, otherwise,
LRE = -log10 |x|
LRE approximates the number of significant digits the result has in common with the certified value. Higher LRE values are better, and the maximum obtainable is 15, the precision of the certified values.
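As an illustration, the LRE calculation above can be sketched in Python. The function name and the cap at 15 digits are our own conventions, mirroring the precision of the certified values; this is not Analyse-it's implementation:

```python
import math

def lre(x, c, max_digits=15):
    """Log relative error: roughly the number of significant digits
    the computed result x shares with the certified value c."""
    err = abs(x - c) / abs(c) if c != 0 else abs(x)
    if err == 0:
        return float(max_digits)  # exact agreement; cap at certified precision
    return min(float(max_digits), -math.log10(err))

# A result that agrees with the certified value to about 7 digits:
print(lre(0.12345678, 0.12345679))  # ~7.09
```

Capping at 15 reflects that a result cannot meaningfully agree with a certified value beyond the digits to which that value is itself certified.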
We tested version 4.00 of Analyse-it using the NIST StRD on an Intel Xeon dual-processor PC. No statistical package achieves perfect accuracy on all the tests, and no one package performs best on every test. In the tests:
Some developers of popular statistical software packages have published their own benchmarks against the NIST StRD, and some independent authors have also published benchmarks, see:
To download the Excel worksheets containing the Analyse-it analyses to perform the NIST StRD, and see comparisons against the published results for other packages, see:
The LRE values obtained by testing Analyse-it against the NIST StRD are summarized below.
The univariate tests consist of nine datasets classified by difficulty.
The mean and standard deviation were computed using the Distribution analysis and compared to the certified values.
The lag-1 autocorrelation is not computed by Analyse-it.
The analysis of variance tests consist of eleven datasets classified by difficulty.
The F statistic was computed using the Compare Groups – ANOVA analysis and compared to the certified value.
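For readers unfamiliar with the statistic being checked, the one-way ANOVA F statistic can be sketched in plain Python using the textbook formula (this is illustrative only, not Analyse-it's implementation):

```python
def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    n = sum(len(g) for g in groups)  # total observations
    k = len(groups)                  # number of groups
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares, k - 1 degrees of freedom
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares, n - k degrees of freedom
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

print(anova_f([[1, 2, 3], [2, 3, 4]]))  # 1.5
```

On the harder StRD datasets, naive formulas like this lose accuracy; the certified values expose exactly that kind of loss.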
*Average/Minimum calculated excluding the marked tests.
NOTE: No statistical package has performed the Simon-Lesage tests 7, 8, and 9 (marked *) with more than 4.6 digits of accuracy, and packages that achieved 4.6 digits of accuracy on test 7 performed markedly worse on tests 8 and 9. This is due to a flaw in the tests themselves rather than in the software packages: the number 1,000,000,000,000.4 cannot be represented exactly in binary IEEE 754 64-bit double-precision floating point. Instead it is represented as 1,000,000,000,000.4000244140625. Even simple summation of a series of such numbers leads to inaccuracy.
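The representation issue described in the note can be verified directly in Python, where `Decimal` exposes the exact binary value stored for a float literal (this is standard library behaviour, not specific to any statistics package):

```python
from decimal import Decimal

# The decimal literal 1000000000000.4 has no exact binary representation;
# the nearest IEEE 754 double is stored instead.
stored = Decimal(1000000000000.4)
print(stored)  # 1000000000000.4000244140625
```

Any software reading the dataset therefore starts from a value that already differs from the certified input in the 17th significant digit.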
The linear regression tests consist of eleven datasets classified by difficulty.
The beta coefficients and standard error of the coefficients were computed using the Fit Model analysis and compared to the certified values. The minimum LRE value is reported for each analysis.
The R2 statistic was computed and compared to the certified value. No R2 value is computed for the no-intercept models because the statistic has no clear interpretation in that case.
The residual sum of squares was computed and compared to the certified value.
NOTE: There may be slight variation in the LRE on repeated runs because multiple processor cores, when available, are used to compute the QR decomposition of the matrix.
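The note above reflects a general property: floating point addition is not associative, so a parallel reduction that partitions terms across cores can differ in the last digits from a sequential sum. A contrived standard-library example:

```python
import math

values = [1.0, 1e100, 1.0, -1e100]

# Left-to-right summation loses the two 1.0 terms to round-off:
print(sum(values))        # 0.0
# Exact summation (or a different evaluation order) keeps them:
print(math.fsum(values))  # 2.0
```

In a real parallel QR decomposition the differences are far smaller, typically only in the final digit or two, which is why only slight variation in the LRE is observed.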