1-way ANOVA is a test for a difference in central location (mean) between two or more independent samples.
The requirements of the test are:
Data in existing Excel worksheets can be used and should be arranged in a List dataset layout or Table dataset layout. The dataset must contain a continuous scale variable and a nominal/ordinal scale variable containing two or more independent groups.
When entering new data we recommend using New Dataset to create a new 2 variables (1 categorical) dataset ready for data entry.
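The list layout described above can be sketched in a few lines: one row per observation, with a continuous value column and a nominal factor column. This is a minimal illustration with hypothetical column roles and data; grouping the rows by the factor recovers the equivalent table layout (one column per group).

```python
# List layout: one row per observation, (factor level, continuous value).
# The levels "A"/"B"/"C" and the values are hypothetical.
rows = [
    ("A", 5.1), ("A", 4.8), ("A", 5.3),
    ("B", 6.0), ("B", 6.2), ("B", 5.9),
    ("C", 4.5), ("C", 4.7), ("C", 4.4),
]

# Grouping by the factor recovers the table layout: one column per group.
groups = {}
for level, value in rows:
    groups.setdefault(level, []).append(value)

print(sorted(groups))   # ['A', 'B', 'C']
print(groups["A"])      # [5.1, 4.8, 5.3]
```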
To start the test:
Excel 97, 2000, 2002 & 2003:
Select any cell in the range containing the dataset to analyse, then click Analyse on the Analyse-it toolbar, click Compare Groups then click 1-way ANOVA.
If the dataset is arranged using the list layout: Click Variable and select the variable to compare, then click Factor and select the independent variable containing the groups to compare.
The report shows the number of observations analysed, and, if applicable, how many missing values were excluded. Summary statistics, including pooled standard error, are shown for each sample.
METHOD The pooled standard error is calculated from the pooled variance, which is based upon all the observations and so is a better estimate than the variance calculated separately for each sample.
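The pooled calculation can be sketched as follows. This is a hedged illustration, not the software's exact code: the pooled variance divides the within-group sum of squares by all N - k residual degrees of freedom, and the standard error of each group mean follows from it. The group names and values are hypothetical.

```python
import math

# Hypothetical samples, one list per group
groups = {
    "A": [5.1, 4.8, 5.3],
    "B": [6.0, 6.2, 5.9],
    "C": [4.5, 4.7, 4.4],
}

n_total = sum(len(v) for v in groups.values())
k = len(groups)

# Within-group sum of squares, pooled across all samples
ss_within = sum(
    sum((x - sum(v) / len(v)) ** 2 for x in v)
    for v in groups.values()
)

# Pooled variance uses all N - k residual degrees of freedom,
# so it is a better estimate than any single group's variance
pooled_var = ss_within / (n_total - k)

# Pooled standard error of each group mean
pooled_se = {g: math.sqrt(pooled_var / len(v)) for g, v in groups.items()}
print(round(pooled_var, 4))
```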
An analysis of variance table is shown which partitions the total variance into components between and within the samples (residual or error variance). The between- and within-sample variances are compared with an F-test to determine if they are different. The p-value is the probability of observing a difference between the sample means at least as large as that observed, if the null hypothesis, that the samples have the same mean, is in fact true. A significant p-value implies that at least two samples have different means. To determine which samples differ, perform multiple comparisons.
A simpler way of understanding how the table relates to the hypothesis of a difference in means: the total variation is the variation when a model with a single common mean is fitted to all the samples, and the residual variation is the variation remaining when a separate mean is fitted to each sample. The between-sample variation is therefore the difference between these two models: the variation explained by allowing each sample its own mean rather than a common mean.
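The partition described above can be sketched directly: the total sum of squares (around the common grand mean) splits into a within-sample part (around each group's own mean) and a between-sample part, and the F statistic is the ratio of their mean squares. A rough illustration with hypothetical data:

```python
# Hypothetical samples
groups = {
    "A": [5.1, 4.8, 5.3],
    "B": [6.0, 6.2, 5.9],
    "C": [4.5, 4.7, 4.4],
}

all_values = [x for v in groups.values() for x in v]
grand_mean = sum(all_values) / len(all_values)
k, n = len(groups), len(all_values)

# Total SS: variation around a single common mean (the "one mean" model)
ss_total = sum((x - grand_mean) ** 2 for x in all_values)

# Within (residual) SS: variation around each group's own mean
ss_within = sum(
    sum((x - sum(v) / len(v)) ** 2 for x in v) for v in groups.values()
)

# Between SS is the difference between the two models
ss_between = ss_total - ss_within

# F compares the between mean square to the within mean square
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 2))
```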
Multiple comparisons allow pairs of samples to be compared to determine which have different means.
When a single hypothesis test is performed, the null hypothesis is rejected when the p-value is smaller than a chosen threshold, say an alpha of 0.05, so the probability of rejecting the null hypothesis when it is in fact true is at most 0.05. If multiple hypothesis tests are each performed with an alpha of 0.05, then the more tests performed, the greater the chance of falsely rejecting at least one true null hypothesis. It is often desirable to control the overall probability of a false rejection at a given level across all the comparisons. To control the overall type I error various methods are available:
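One common way to control the overall (family-wise) type I error, offered as a hedged illustration rather than the software's specific method, is the Bonferroni adjustment: each of the m pairwise tests is carried out at alpha/m, so the chance of any false rejection stays at or below alpha. The group names below are hypothetical.

```python
from itertools import combinations

alpha = 0.05
groups = ["A", "B", "C", "D"]   # hypothetical group labels

pairs = list(combinations(groups, 2))
m = len(pairs)                  # number of pairwise comparisons
per_test_alpha = alpha / m      # Bonferroni-adjusted threshold

print(m)                        # 6 comparisons for 4 groups
print(per_test_alpha)
```

Note how quickly the per-test threshold shrinks: with 4 groups there are already 6 comparisons, so each test must be performed at roughly 0.0083 to keep the overall error at 0.05.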
To calculate multiple comparisons:
The samples compared, the difference between the means, and the confidence interval are shown. If the confidence interval does not span zero, the difference is significant.
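The confidence-interval check above can be sketched for a single pair of samples. This is a rough illustration under stated assumptions: it uses a pooled-variance standard error and a t critical value copied from a table for this df (real software computes the exact quantile, and may use a multiple-comparison-adjusted interval). The data are hypothetical.

```python
import math

# Two hypothetical samples
a = [5.1, 4.8, 5.3]
b = [6.0, 6.2, 5.9]

mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
diff = mean_b - mean_a

# Pooled variance over both samples
ss = sum((x - mean_a) ** 2 for x in a) + sum((x - mean_b) ** 2 for x in b)
pooled_var = ss / (len(a) + len(b) - 2)
se_diff = math.sqrt(pooled_var * (1 / len(a) + 1 / len(b)))

t_crit = 2.776  # t(0.975, df=4), taken from a table for illustration
lower, upper = diff - t_crit * se_diff, diff + t_crit * se_diff

# If the interval does not span zero, the difference is significant
print(lower > 0 or upper < 0)   # True
```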
To disable multiple comparisons: