Recently we’ve been busy updating Analyse-it to stay aligned with the latest CLSI protocols, and we’ve added a new inverse prediction feature.
If you have active maintenance, you can download and install the update now. If maintenance on your license has expired, you can renew it to get this update and forthcoming updates.
New CLSI EP6-Ed2
The CLSI recently released guideline EP6-Ed2, which replaces EP06-A, published in 2003.
EP06-A relied on fitting first-order (straight line), second-order (parabolic), and third-order (sigmoidal) polynomials to the data. A method was then determined to be linear or possibly nonlinear based on statistical criteria. The degree of nonlinearity was calculated as the difference between the linear fit and the best-fitting nonlinear model (the parabolic or sigmoidal curve), and could then be compared against allowable nonlinearity criteria.
The new CLSI EP6-Ed2 protocol no longer requires fitting polynomial models to determine linearity. Instead, the deviation from linearity is calculated as the difference between the mean of each level and a linear fit through the data, which can then be compared against the allowable nonlinearity criteria. Other changes include revisions to the experimental design and a greater focus on the structure of the variance across the measuring interval.
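The EP6-Ed2 calculation is simple enough to sketch in a few lines. The following is a minimal illustration using hypothetical data (five levels, three replicates each; not taken from the guideline), and it only covers the deviation-from-linearity arithmetic, not the full protocol:

```python
# Minimal sketch of the EP6-Ed2 deviation-from-linearity calculation.
# Hypothetical data: 5 levels, 3 replicates each (not from the guideline).
from statistics import mean

levels = [1, 2, 3, 4, 5]  # assigned concentrations
results = {
    1: [10.1, 9.8, 10.0],
    2: [19.5, 20.2, 19.9],
    3: [30.4, 29.8, 30.1],
    4: [41.5, 42.0, 41.8],  # slight upward deviation
    5: [49.0, 48.6, 48.8],
}

# Ordinary least-squares straight line through all individual results
xs = [x for x in levels for _ in results[x]]
ys = [y for x in levels for y in results[x]]
mx, my = mean(xs), mean(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Deviation from linearity at each level: level mean minus the linear fit,
# which can then be compared against the allowable nonlinearity criteria
deviation = {x: mean(results[x]) - (intercept + slope * x) for x in levels}
```

In practice the deviations are often expressed as a percentage of the fitted value before comparison with the allowable criteria.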
Recent improvements to the Analyse-it Method Validation edition, in version 5.50 and later, include the addition of probit regression. Probit regression is useful when establishing the limit of detection (LoD) for an RT-qPCR assay.
The CLSI EP17-A2 protocol provides guidance for estimating the LoD and is recognized by the FDA. In this blog post, we will look at how to perform the relevant part of the protocol using Analyse-it.
For details on experimental design, see section 5.5 in the CLSI EP17-A2 guideline. In Analyse-it, you should arrange the data in 2 columns: the first should be the concentration, and the second should be the result, positive or negative. You should have a minimum of 20 replicates at each concentration. We have put together a hypothetical example in the workbook, which you can use to follow the steps below:
The analysis task pane opens.
NOTE: If using Analyse-it pre-version 5.65, on the Fit panel, in the Predict X given Probability edit box, type 0.95.
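To show what the inverse prediction step is doing, here is a small sketch using only the Python standard library. It assumes a probit model P(detect) = Phi(b0 + b1·log10(conc)) has already been fitted; the coefficients below are hypothetical, not from a real fit:

```python
# Inverse prediction: given a fitted probit model
#   P(detect) = Phi(b0 + b1 * log10(conc)),
# solve for the concentration at which the detection probability is 0.95
# (the LoD criterion used in CLSI EP17-A2 for RT-qPCR assays).
from statistics import NormalDist

b0, b1 = 1.2, 2.5       # hypothetical probit intercept and slope
target_p = 0.95

z = NormalDist().inv_cdf(target_p)   # Phi^-1(0.95), about 1.645
log10_lod = (z - b0) / b1
lod = 10 ** log10_lod

# Check: plugging the LoD back into the model recovers the target probability
p_check = NormalDist().cdf(b0 + b1 * log10_lod)
```

Analyse-it performs the fit and the inverse prediction for you; this sketch only illustrates the algebra behind the "Predict X given Probability" setting.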
Following our last blog post, today we will show how to calculate binary agreement. The protocol is a useful companion resource for laboratories and diagnostic companies developing qualitative diagnostic tests.
In Analyse-it, you should arrange the data in frequency or case form, as discussed in our earlier blog post. You can find an example of both forms in the workbook and follow the steps below.
NOTE: The Average method is useful when comparing two laboratories or observers where neither is considered a natural comparator. The reference method is asymmetric, and the result will depend on the assignment of the X and Y methods, whereas the average method is symmetric, and the result does not change when swapping the X and Y methods.
INFO: Older versions of Analyse-it do not support the Average method, and the Agreement by category checkbox is called Agreement.
The analysis report shows positive and negative agreement statistics.
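The symmetry point above is easy to demonstrate numerically. The sketch below uses hypothetical 2x2 counts and, as one example of a symmetric statistic, the proportion of specific positive agreement 2a/(2a+b+c); this is a common symmetric choice and may differ in detail from the Average method as implemented in Analyse-it:

```python
# Why a symmetric agreement statistic does not depend on which method is
# assigned to X and which to Y. Counts are hypothetical:
#   a = both positive, b = Y+/X-, c = Y-/X+, d = both negative.
a, b, c, d = 40, 5, 9, 46

def reference_ppa(a, b, c, d):
    # Asymmetric: treats the X method as the comparator
    return a / (a + c)

def symmetric_ppa(a, b, c, d):
    # Proportion of specific positive agreement: symmetric in b and c
    return 2 * a / (2 * a + b + c)

# Swapping the X and Y methods transposes the table, interchanging b and c
swapped = (a, c, b, d)
ref_same = reference_ppa(a, b, c, d) == reference_ppa(*swapped)   # False
sym_same = symmetric_ppa(a, b, c, d) == symmetric_ppa(*swapped)   # True
```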
Due to COVID-19, there is currently a lot of interest surrounding the sensitivity and specificity of diagnostic tests. These terms relate to the accuracy of a test in diagnosing an illness or condition. To calculate these statistics, the true state of the subject, that is, whether or not the subject has the illness or condition, must be known.
In recent FDA guidance for laboratories and manufacturers, the FDA states that users should use a clinical agreement study to establish performance characteristics (sensitivity/PPA, specificity/NPA). While the terms sensitivity and specificity are widely known and used, the terms PPA and NPA are not.
The protocol describes the terms positive percent agreement (PPA) and negative percent agreement (NPA). When you have two binary diagnostic tests to compare, you can use an agreement study to calculate these statistics.
As you can see, these measures are asymmetric. That is, interchanging the test and comparative methods, and therefore the values of b and c, changes the statistics. They do, however, have a natural, simple, interpretation when one method is a reference/comparative method and the other a test method.
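The asymmetry is easy to verify with a small example. The counts below are hypothetical, and the table layout follows the usual convention (a = both positive, b = test+/comparative-, c = test-/comparative+, d = both negative):

```python
# PPA and NPA from a 2x2 agreement table, and a check that interchanging
# the test and comparative methods (which interchanges b and c) changes
# the statistics. Hypothetical counts:
#              comparative +   comparative -
#   test +          a (90)          b (4)
#   test -          c (10)          d (96)
a, b, c, d = 90, 4, 10, 96

ppa = a / (a + c)   # positive percent agreement = 90/100 = 0.90
npa = d / (b + d)   # negative percent agreement = 96/100 = 0.96

# Swap the roles of the two methods: the table transposes, so b and c swap
ppa_swapped = a / (a + b)   # 90/94
npa_swapped = d / (c + d)   # 96/106
```

With one method designated the comparative method, PPA and NPA read naturally as "the proportion of comparative positives (negatives) the test method also calls positive (negative)".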
It is important in diagnostic accuracy studies that the true clinical state of the patient is known. For example, in developing a SARS-CoV-2 antibody test, for the positive subgroup you might enlist subjects who had a positive SARS-CoV-2 PCR test and clinically confirmed illness. Then, for the negative subgroup, you might use samples taken from subjects before the illness was in circulation. It is also essential to consider other factors, such as the severity of illness, as they can have a marked effect on the performance characteristics of the test. A test that shows high sensitivity and specificity in a hospital setting with very ill patients can be much less effective in population screening, where illness is generally less severe.
In cases where the true condition of the subject is not known, and only results from a comparative method and a new test method are available, an agreement measure is more suitable. We will cover that scenario in detail in a future blog post.
In our last post, we mentioned that the 'accuracy' statistic, also known as the probability of a correct result, was a useless measure for diagnostic test performance. Today we'll explain why.
Let's take a hypothetical test with a sensitivity of 86% and specificity of 98%.
As a first scenario we simulated test results on 200 subjects with, and 200 without, the condition. The accuracy statistic (TP+TN)/N is equal to (172+196)/400 = 92%. See below:
In a second scenario we again simulated test results on 400 subjects, but only 50 with, and 350 without, the condition. The accuracy statistic is (43+343)/400 = 96.5%. See below:
The accuracy statistic is effectively a weighted average of sensitivity and specificity, with weights equal to the sample prevalence P(D=1) and the complement of the prevalence (that is, P(D=0) = 1-P(D=1)).
Accuracy = P(TP or TN) = (TP+TN)/N = Sensitivity * P(D=1) + Specificity * P(D=0)
Therefore, as the prevalence in the sample changes, so does the statistic. The prevalence of the condition in the sample may vary due to the availability of subjects, or it may be fixed during the design of the study. It's easy to see how the accuracy statistic can be manipulated to weight it in favor of whichever of sensitivity or specificity performs best.
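The two scenarios from this post can be checked directly against the weighted-average identity:

```python
# Verify that accuracy = (TP+TN)/N equals the prevalence-weighted average
# of sensitivity and specificity, using the two scenarios from the post.
sens, spec = 0.86, 0.98

def accuracy(n_with, n_without):
    tp = sens * n_with       # true positives among subjects with the condition
    tn = spec * n_without    # true negatives among subjects without it
    return (tp + tn) / (n_with + n_without)

def weighted(n_with, n_without):
    p = n_with / (n_with + n_without)   # sample prevalence P(D=1)
    return sens * p + spec * (1 - p)

acc1 = accuracy(200, 200)   # scenario 1: (172+196)/400 = 0.92
acc2 = accuracy(50, 350)    # scenario 2: (43+343)/400 = 0.965
```

Only the mix of subjects changed between the scenarios, yet the "accuracy" of the same test moved from 92% to 96.5%.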
There’s currently a lot of press attention surrounding the finger-prick antibody IgG/IgM strip test to detect if a person has had COVID-19. Here in the UK companies are buying them to test their staff, and some in the media are asking why the government hasn’t made millions of tests available to find out who has had the illness and could potentially get back to work.
We did a quick Google search, and there are many similar-looking test kits for sale. The performance claims for some were sketchy, with a few based on as little as 20 samples! However, we found a webpage for a COVID-19 IgG/IgM Rapid antibody test that used a total of 525 cases, with 397 positives and 128 negatives, clinically confirmed. We have no insight as to the reliability of the claims made in the product information. The purpose of this blog post is not to promote or denigrate any test, but to illustrate how to look further than headline figures.
We ran the data through Analyse-it version 5.51. Here's the workbook containing the analysis:
Prediction intervals on Deming regression are a major new feature in the Analyse-it Method Validation Edition version 4.90, just released.
A prediction interval is an interval that has a given probability of including one or more future observations. Prediction intervals are very useful in method validation for testing the commutability of reference materials or processed samples with patient samples. Two CLSI protocols both use prediction intervals.
We will illustrate this new feature using an example from CLSI EP14-A3:
1) Open the workbook .
2) On the Analyse-it ribbon tab, in the Statistical Analysis group, click Method Comparison and then click Ordinary Deming regression.
3) In the X (Reference / Comparative) drop-down list, select Cholesterol: A.
4) In the Y (Test / New) drop-down list, select Cholesterol: B.
5) On the Analyse-it ribbon tab, in the Method Comparison group, click Restrict to Group.
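For readers curious about what happens behind the ribbon, ordinary Deming regression itself can be sketched in a few lines. The data below are hypothetical (not the CLSI EP14-A3 example), the error-variance ratio is assumed known and equal to 1, and the prediction-interval computation in Analyse-it involves considerably more than this sketch shows:

```python
# Minimal sketch of ordinary Deming regression (measurement error in both
# X and Y), with the error-variance ratio lam assumed known (1 = equal
# error variances). Hypothetical data.
from statistics import mean

xs = [1.0, 2.1, 2.9, 4.2, 5.1, 6.0]   # reference/comparative method
ys = [1.2, 2.0, 3.1, 4.0, 5.3, 6.2]   # test/new method
lam = 1.0   # ratio of Y-error variance to X-error variance

n = len(xs)
mx, my = mean(xs), mean(ys)
sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
syy = sum((y - my) ** 2 for y in ys) / (n - 1)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# Closed-form Deming slope and intercept
slope = (syy - lam * sxx
         + ((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2) ** 0.5) / (2 * sxy)
intercept = my - slope * mx
```

With near-identical methods, as here, the slope comes out close to 1 and the intercept close to 0.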
Today we released version 4.60 of the Analyse-it Method Validation edition.
The new release includes precision analysis with three nested factors, extending Analyse-it’s support for CLSI EP05-A3 multi-laboratory precision studies.
If you have active maintenance, you can download and install the update now. If maintenance on your license has expired, you can renew it to get this update and forthcoming updates.
Today we released version 4.0 of the Analyse-it Method Validation edition. This is a major new release with many new features and improvements.
The latest release of the Analyse-it Method Validation edition now supports 10 of the latest CLSI evaluation protocol (EP) guidelines. CLSI guidelines are world-renowned and are recognized by the College of American Pathologists (CAP), The Joint Commission, and the US Food and Drug Administration (FDA).
Analyse-it has been a driving force in the adoption of statistics in method validation for over 15 years, has influenced many recommendations, and is the only software available with such extensive coverage for the latest CLSI guidelines.
CLSI guidelines supported in version 4.0 include:
Measurement Systems Analysis (MSA) is a new feature in version 4.0. MSA unifies precision and linearity, which were available in earlier versions of Analyse-it, but also includes trueness (bias) and detection capability so you can establish the limit of blank (LoB) and limit of detection (LoD). The unification of these analyses in MSA lets you dig deep to examine and understand the performance characteristics of a measurement procedure.
The recent passing of Professor Rick Jones caused me to reflect on the past.
I was very fortunate to earn a work placement with Dr Rick Jones at The University of Leeds in the summer of 1990. Rick was enthusiastic about the role of IT in medicine, and after securing funding for a full-time position he employed me as a computer programmer. Early projects included software for automating the monitoring of various blood marker tests and software to diagnose Down’s syndrome. At the time many hospitals had in-house solutions for diagnosing Down’s syndrome, and although the project took many years and the help of many other people to complete, it eventually gained widespread adoption.
Around 1992, Rick came up with the idea of a statistics package that integrated into Microsoft Excel. Armed with a ring-bound folder containing the Excel SDK and a pile of medical statistics books, I set about writing the software in C++. It wasn’t long before the first version of Astute was ready and commercially released.
If you follow us on Facebook, you will no doubt already know about the recent improvements in the Analyse-it Method Validation edition and the release of our first video tutorial. If not, now is a good time to start following us, since we post short announcements and feature previews on Facebook and use the blog only for news about major releases.
The latest changes and improvements to the Analyse-it Method Validation edition include:
If you have active maintenance, you can download and install the update now. If maintenance on your licence has expired, you can renew it to get this update and forthcoming updates.
Finally, we are delighted to release our first video tutorial. It is the video equivalent of the tutorial above: it walks and talks you through using Analyse-it to determine the agreement between methods. Sit back and watch.
We intend to produce more video tutorials in the future, so let us know what you think: what you like, what you dislike, and how we can improve them.
Today we released the Analyse-it Method Validation edition version 3.5. The software is feature complete, validated, and includes documentation. It supports Excel 2007, Excel 2010 (32- and 64-bit) and Excel 2013 (32- and 64-bit).
We took this opportunity to rename the product from the Analyse-it Method Evaluation edition to the Method Validation edition. The product is the same, but the new name better reflects the intended purpose of the product.
New features include:
Diagnostic performance / ROC
Binary diagnostic tests
For more information about the new version, and to download a free 30-day trial, see:
Pricing for the Analyse-it Method Validation edition starts at US$ 699 for a 1-user perpetual licence. If you already have a licence with active maintenance, you may qualify for a free upgrade; otherwise, you can extend maintenance to get the upgrade (and all updates for 1 or 3 years) free of charge. To see if you qualify for a free upgrade, or to get a quote to extend maintenance, see:
Today we released the first public beta test version of the Analyse-it Method Evaluation edition, version 3.5. The software is feature complete and validated; only the documentation remains to be completed.
We invite everyone to download the beta and try the new version of the software before it is finally released in September. You will need Excel 2007, 2010, or 2013 (32- and 64-bit versions are supported). The beta can be installed and used alongside older versions of Analyse-it, so it won't interrupt your day-to-day work.
To download the beta version:
To activate the software use the product key:
The software will be publicly released at the end of September 2013.
If you purchased a licence in the last 12 months, the 12 months of maintenance included means you will qualify for a free upgrade to the new version.
If you are outside the 12-month free upgrade period, you can purchase 12 months of maintenance, to get the upgrade (and all updates in the following year), for 20% of the cost of your licence. For example, if you have a 1-user licence then the upgrade will cost 20% of the cost of a 1-user licence. Similarly, if you have a 3-user licence, the upgrade will cost 20% of the cost of a 3-user licence.
Today we released the 3rd alpha release of the Analyse-it Method Evaluation Edition 3.5. Alpha releases are versions of the software that are still in active development, but are released to a small group of customers so we can identify and fix any problems before the public beta release.
This release now completes the package with method comparison, which includes Deming regression, Passing-Bablok regression, and Bland-Altman difference plots. Linearity, precision analysis, diagnostic performance (ROC analysis and binary test performance) and reference intervals were already included in earlier alpha releases.
If you would like to take part in this and subsequent test phases, reply to this post. The test releases will run alongside any existing version of Analyse-it, so your day-to-day work won't be interrupted or affected. Those who help during testing will receive a discount on the upgrade (a free upgrade for those who contribute the most) when the product is released later this year.
Today we released the 2nd alpha of the Analyse-it Method Evaluation Edition 3.5.
Alpha releases are pre-release versions of the software that are still in active development. We release them to a small group of customers so we can get feedback and quickly identify and fix any problems before the public beta release. If you want to take part in the test phase, reply or comment on this post. You can use pre-release versions of Analyse-it alongside your existing version, so it won't disrupt your work. And if you help during the test phases, you will get a discount on the upgrade (a free upgrade for those who contribute the most) when the product is released later this year.
This latest alpha release includes linearity and precision analysis, plus diagnostic test performance (ROC analysis and binary test performance) and reference intervals from the 1st alpha.
Some of the new features included so far are:
We are now starting to release test previews of a major update to the Analyse-it Method Evaluation edition. The new release will include many new features (we'll reveal more in the coming weeks) and will support 32- and 64-bit versions of Excel 2007, 2010, and 2013.
During the initial test phases we release development versions of the application to a small group of customers to ensure it installs and runs as expected on a wide range of PCs and configurations. The official beta test phase then follows, where more customers are invited to download and use the software while we iron out the final few bugs before the official release. The official release is planned for summer 2013.
If you want to take part in the test phase, reply to this post and let us know which aspects of Analyse-it you use:
Analytical Linearity, Precision, Accuracy,
Diagnostic performance (ROC, binary test performance),
Today we’re delighted to publish the second case study into the use of Analyse-it.
The case study features a national clinical laboratory in the USA that offers more than 2,000 tests and combinations to major commercial and government laboratories. They use Analyse-it to determine the analytical performance of automated immunoassays for some of the industry’s leading in-vitro diagnostic device makers, including Abbott Diagnostics, Bayer Diagnostics, Beckman Coulter, and Roche Diagnostics.
Unfortunately we cannot name the end-user, or the organisation she works for, in the case study. Although she was delighted to feature in the case study, at final approval her organisation's committee preferred the names be withheld. Thankfully they have allowed us to use the case study, albeit anonymously.
You can read the case study online now or download it.
We would love to feature more customer stories in case studies. If you can get approval to participate, which we realise is very difficult in many industries, and have 20 minutes to spare for a telephone interview, please contact us.
Today we’re delighted to publish the first case study into the use of Analyse-it.
Marco Balerna Ph.D., a Clinical Chemist at the EOC in Switzerland, used Analyse-it when replacing the clinical chemistry and immunological analysers in EOC’s laboratories.
Since the EOC provides clinical chemistry services to five large hospitals and three small clinics in the region, it was essential the transition to the new analysers went smoothly. Marco used Analyse-it to ensure the analyser’s performance met the manufacturer’s claims, to ensure the reporting of patient results was not affected, and to comply with the regulations of the EOC’s accreditation.
Overall the project involved comparing performance for 110-115 parameters, comprising over 25,600 measurements with control materials and patient samples.
Marco was so impressed with Analyse-it and the time he saved that he was very enthusiastic when we asked if we could feature his story in a case study. We would like to publicly thank Marco for his co-operation in the case study. Grazie Marco! Salute!