Analyse-it blog: the latest news on new features and software releases.

18-Aug-2017 Analyse-it 4.90 released: Prediction intervals

Prediction intervals on Deming regression are a major new feature in the Analyse-it Method Validation Edition version 4.90, just released.

A prediction interval is an interval that has a given probability of including one or more future observations. Prediction intervals are very useful in method validation for testing the commutability of reference materials or processed samples with patient samples. Two CLSI protocols, EP14-A3: Evaluation of Commutability of Processed Samples and EP30-A: Characterization and Qualification of Commutable Reference Materials for Laboratory Medicine, both use prediction intervals.

We will illustrate this new feature using an example from CLSI EP14-A3:

1) Open the workbook EP14-A3.xlsx.

2) On the Analyse-it ribbon tab, in the Statistical Analysis group, click Method Comparison and then click Ordinary Deming regression.

The analysis task pane opens.

3) In the X (Reference / Comparative) drop-down list, select Cholesterol: A.

4) In the Y (Test / New) drop-down list, select Cholesterol: B.

5) On the Analyse-it ribbon tab, in the Method Comparison group, click Restrict to Group.

6) In the Group / Color / Symbol drop-down list, select Sample Type.

7) In the Restrict fit to group drop-down list, select Patient.

8) In the Prediction band edit box, type 95%.

NOTE: Select the Familywise coverage check box to control the coverage probability simultaneously for all additional samples rather than individually for each sample.

9) On the Descriptives task pane, select Label points, and choose Additional groups only.

10) Click Calculate.

The report shows the scatter plot with fitted regression line and 95% prediction interval (see image below). The regression line is only fitted to the points in the Patient group, as set in step 7 above, and additional points are colored depending on the type of sample, as set in step 6 above.

Any points outside the prediction band are not commutable with the patient samples; in this case you can see that sample ‘c’ is not commutable. The commutability table lists the additional samples and whether or not each is commutable with the patient samples.
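To make the idea concrete, the fit and band can be sketched in a few lines. This is a minimal illustration only, not Analyse-it's actual algorithm: it fits ordinary Deming regression (assuming an error-variance ratio of 1) and approximates a 95% prediction band by a crude bootstrap, rather than the analytical interval the software computes.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Ordinary Deming regression. `delta` is the ratio of the y to x
    measurement-error variances; delta=1 gives orthogonal regression."""
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y)[0, 1]
    d = syy - delta * sxx
    slope = (d + np.sqrt(d ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = np.mean(y) - slope * np.mean(x)
    return intercept, slope

def bootstrap_prediction_band(x, y, x_new, n_boot=2000, level=0.95, seed=1):
    """Crude percentile-bootstrap prediction band: resample pairs, refit,
    and add resampled residual noise to each prediction."""
    rng = np.random.default_rng(seed)
    n = len(x)
    a, b = deming_fit(x, y)
    resid = y - (a + b * x)
    preds = np.empty((n_boot, len(x_new)))
    for i in range(n_boot):
        idx = rng.integers(0, n, n)             # resample pairs
        ai, bi = deming_fit(x[idx], y[idx])     # refit on the resample
        preds[i] = ai + bi * x_new + rng.choice(resid, len(x_new))
    return np.quantile(preds, [(1 - level) / 2, (1 + level) / 2], axis=0)
```

A point whose y value falls outside the band at its x value would be flagged as non-commutable, mirroring the commutability table described above.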

The steps to perform an EP30 study are the same as described above. Note, however, that EP30 forms the prediction interval using the fit of the patient samples and the precision of the reference materials, whereas Analyse-it uses the fit and precision of the patient samples. We chose this approach because there are usually too few reference-material samples to establish a reliable estimate of the precision.

We have extended the prediction intervals beyond the CLSI EP guidelines, so they support any number of replicates and are also available with Ordinary and Weighted Deming regression. This removes the need to log-transform values as recommended in EP14, a transformation that, although it corrects for a constant CV, distorts the relationship between the two methods.

If you have active maintenance, you can download and install the update now; see updating the software. If maintenance on your license has expired, you can renew it to get this update and forthcoming updates; see renew maintenance.

18-May-2017 Parameter estimation

Often we collect a sample of data not to make statements about that particular sample but to generalize our statements to say something about the population. Estimation is the process of making inferences about an unknown population parameter from a random sample drawn from the population of interest. An estimator is a method for arriving at an estimate of the value of an unknown parameter. Often there are many competing estimators for the population parameter that differ based on the underlying statistical theory.
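As a toy illustration of competing estimators (an assumed example, not from the article): both the sample mean and the sample median estimate the centre of a symmetric population, and a quick simulation shows why one might be preferred over the other for normal data.

```python
import numpy as np

# Simulate repeated sampling from a hypothetical normal population and
# compare two competing estimators of its centre: mean vs median.
rng = np.random.default_rng(42)
mu, sigma, n = 100.0, 15.0, 25

estimates_mean, estimates_median = [], []
for _ in range(5000):
    sample = rng.normal(mu, sigma, n)       # a random sample from the population
    estimates_mean.append(sample.mean())
    estimates_median.append(np.median(sample))

# Both estimators are (nearly) unbiased here, but the mean has the
# smaller sampling variance, i.e. it is more efficient for normal data.
print(np.mean(estimates_mean), np.var(estimates_mean))
print(np.mean(estimates_median), np.var(estimates_median))
```

For skewed or heavy-tailed populations the ranking can reverse, which is exactly why competing estimators exist.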

Continue reading article  

11-Aug-2015 The numerical accuracy of Analyse-it against the NIST StRD

A critical feature of any analytical and statistical software is accuracy. You are making decisions based on the statistics obtained and you need to know you can rely on them.

We have documented our numerical accuracy previously, but another good benchmark to test statistical software against is the NIST StRD. The Statistical Engineering and the Mathematical and Computational Sciences Divisions of NIST’s Information Technology Laboratory developed datasets with certified values for a variety of statistical methods, against which statistical software packages can be benchmarked. The certified values are computed using ultra-high-precision floating-point arithmetic and are accurate to 15 significant digits.
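StRD results are usually summarised as the log relative error (LRE), roughly the number of significant digits to which a computed result agrees with the certified value. A minimal sketch of the calculation (the numeric values below are hypothetical, for illustration only):

```python
import math

def lre(computed, certified):
    """Log relative error: roughly the number of significant digits to
    which `computed` agrees with the certified value (capped at 15,
    since certified values carry 15 digits)."""
    if computed == certified:
        return 15.0
    if certified == 0:
        return min(15.0, -math.log10(abs(computed)))  # log absolute error
    return min(15.0, -math.log10(abs(computed - certified) / abs(certified)))

# Hypothetical example: a package reproduces a certified value to ~12 digits.
certified = 1.00211681802045
computed = 1.00211681802
print(f"digits of accuracy: {lre(computed, certified):.1f}")
```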

Continue reading article  

29-Oct-2014 A sombre note: Professor Rick Jones

The recent passing of Professor Rick Jones caused me to reflect on the past.

I was very fortunate to earn a work placement with Dr Rick Jones at The University of Leeds in the summer of 1990. Rick was enthusiastic about the role of IT in medicine, and after securing funding for a full-time position he employed me as a computer programmer. Early projects included software for automating the monitoring of various blood marker tests and software to diagnose Down’s syndrome. At the time many hospitals had in-house solutions for diagnosing Down’s syndrome, and although the project took many years and the help of many other people to complete, it eventually gained widespread adoption.

Continue reading article  

18-Aug-2014 Analyse-it 3.80 released: Principal Component Analysis (PCA)

Today we released version 3.80 of the Analyse-it Standard edition.

The new release includes Principal Component Analysis (PCA), an extension to the multivariate analysis already available in Analyse-it. It also includes probably the most advanced implementation of biplots available in any commercial package.

New features include:

The tutorial walks you through a guided example looking at how to use correlation and principal component analysis to discover the underlying relationships in data about New York Neighbourhoods. It demonstrates the amazing new features and helps you understand how to use them. You can follow the tutorial yourself, at your own pace.

Continue reading article  

18-Feb-2013 Quantiles, Percentiles: Why so many ways to calculate them?

What is a sample quantile or percentile? Take the 0.25 quantile (also known as the 25th percentile, or 1st quartile) -- it defines the value (let’s call it x) of a random variable such that the probability that a random observation of the variable is less than x is 0.25 (a 25% chance).

A simple question, with a simple definition? The problem is calculating quantiles. The formulas are simple enough, but take a quick look on Wikipedia and you’ll see there are at least 9 alternative methods. Consequently, statistical packages use different formulas to calculate quantiles, and we're sometimes asked why the quantiles calculated by Analyse-it don’t agree with Excel, SAS, or R.
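You can see the disagreement directly. For instance, NumPy exposes several of the Hyndman–Fan quantile methods through the `method` argument of `np.quantile` (an illustration outside Analyse-it):

```python
import numpy as np

data = np.array([1.0, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# The same 0.25 quantile under different interpolation methods.
for method in ["inverted_cdf", "averaged_inverted_cdf", "hazen",
               "weibull", "linear", "median_unbiased", "normal_unbiased"]:
    q = np.quantile(data, 0.25, method=method)
    print(f"{method:>22}: {q}")
```

Excel's PERCENTILE.INC corresponds to `method="linear"` and PERCENTILE.EXC to `method="weibull"`, which is one common source of the discrepancies mentioned above.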

Continue reading article  

6-Nov-2008 Normal quantile & probability plots

In a previous post, we explained the tests provided in Analyse-it to determine whether a sample has a normal distribution. In that post, we mentioned that although hypothesis tests are useful, you should not rely on them alone. You should always look at the histogram and, perhaps more importantly, the normal plot.

The beauty of the normal plot is that it is designed specifically for judging normality. The plot is very easy to interpret and lets you see where the sample deviates from normality.
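What the normal plot actually computes can be sketched in a few lines (a generic illustration using Hazen plotting positions, not necessarily Analyse-it's exact choice): sort the sample and pair it with the normal quantiles of the plotting positions; for normal data the points fall near a straight line.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
sample = rng.normal(50, 5, 40)          # a hypothetical sample

n = len(sample)
plotting_pos = (np.arange(1, n + 1) - 0.5) / n          # Hazen positions
theoretical = np.array([NormalDist().inv_cdf(p) for p in plotting_pos])
observed = np.sort(sample)

# For normal data the points lie near a line whose slope and intercept
# estimate the standard deviation and mean.
slope, intercept = np.polyfit(theoretical, observed, 1)
print(slope, intercept)
```

Curvature or S-shapes in the plotted points are what reveal skewness or heavy tails, which is why the plot shows *where* a sample deviates from normality.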

Continue reading article  