You are viewing documentation for the old version 2.20 of Analyse-it. If you are using version 3.00 or later, we recommend you see the 1-way ANOVA topic in the current documentation.

1-way ANOVA

This procedure is available in both the Analyse-it Standard and Analyse-it Method Evaluation editions.

1-way ANOVA is a test for a difference in central location (mean) between two or more independent samples.

The requirements of the test are:

  • Two or more independent samples measured on a continuous scale.
  • Samples are from normally distributed populations with the same variance, also known as homogeneity of variance.


Arranging the dataset

Data in existing Excel worksheets can be used and should be arranged in a List dataset layout or Table dataset layout. The dataset must contain a continuous scale variable and a nominal/ordinal scale variable containing two or more independent groups.

When entering new data we recommend using New Dataset to create a new 2 variables (1 categorical) dataset ready for data entry.

Using the test

To start the test:

  1. Excel 2007:
    Select any cell in the range containing the dataset to analyse, then click Compare Groups on the Analyse-it tab, then click 1-way ANOVA.
  2. Excel 97, 2000, 2002 & 2003:
    Select any cell in the range containing the dataset to analyse, then click Analyse on the Analyse-it toolbar, click Compare Groups then click 1-way ANOVA.

  3. If the dataset is arranged using the table layout:
    Tick samples to compare in Variable - Groups.

    If the dataset is arranged using the list layout:
    Click Variable and select the variables to compare, then click Factor and select the independent variable containing the groups to compare.

  4. Click OK to run the test.

The report shows the number of observations analysed, and, if applicable, how many missing values were excluded. Summary statistics, including pooled standard error, are shown for each sample.

METHOD The pooled standard error is calculated from the pooled variance, which is based upon all the observations and so is a better estimate than the variance calculated separately for each sample.
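
To illustrate the calculation outside Analyse-it, here is a minimal Python sketch (the data are hypothetical) computing the pooled variance as the degrees-of-freedom-weighted average of the sample variances, and from it the pooled standard error of a sample mean:

    import numpy as np

    def pooled_variance(samples):
        # Weighted average of the sample variances, weighted by degrees of freedom (n - 1)
        dfs = [len(s) - 1 for s in samples]
        variances = [np.var(s, ddof=1) for s in samples]
        return sum(df * v for df, v in zip(dfs, variances)) / sum(dfs)

    a = [5.1, 4.9, 5.4, 5.0]
    b = [5.8, 6.0, 5.6, 5.9]
    sp2 = pooled_variance([a, b])
    print(sp2)                    # pooled variance
    print(np.sqrt(sp2 / len(a)))  # pooled standard error of the mean of sample a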

An analysis of variance table is shown which partitions the total variance into components between and within the samples (residual or error variance). The between- and within-sample variances are compared with an F-test to determine if they differ. The p-value is the probability of observing an F statistic at least as large as that observed if the null hypothesis, that the samples have the same mean, were true. A significant p-value implies that at least two samples have different means. To determine which samples differ, perform multiple comparisons (see below).
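
The same F-test can be reproduced outside Analyse-it; a minimal sketch using SciPy's f_oneway, with hypothetical data:

    from scipy import stats

    a = [5.1, 4.9, 5.4, 5.0]
    b = [5.8, 6.0, 5.6, 5.9]
    c = [5.3, 5.5, 5.2, 5.4]

    # Ratio of between-sample to within-sample variance, and its p-value
    f, p = stats.f_oneway(a, b, c)
    print(f, p)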

A simpler way of understanding how the table relates to the hypothesis of a difference in means: the total variation is the variation when a model with a common mean for all the samples is fitted, and the residual variation is the variation remaining when a separate mean is fitted for each sample. The between-sample variation is therefore the difference between these two models: the increase in variation incurred by fitting the model with a common mean rather than a separate mean for each sample.
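
A minimal Python sketch of this decomposition, with hypothetical data, showing that the total sum of squares splits into between- and within-sample components:

    import numpy as np

    groups = [np.array([5.1, 4.9, 5.4, 5.0]),
              np.array([5.8, 6.0, 5.6, 5.9])]
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()

    ss_total = ((all_obs - grand_mean) ** 2).sum()                # common-mean model
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)  # separate mean per sample
    ss_between = ss_total - ss_within                             # variation explained by group means
    print(ss_total, ss_between, ss_within)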

Comparing groups with multiple comparisons

Multiple comparisons allow pairs of samples to be compared to determine which have different means.

When a hypothesis test is performed, the probability of rejecting the null hypothesis when it is in fact true (a type I error) is controlled at the significance level, alpha. It is usual to declare a p-value significant when it is smaller than, say, an alpha of 0.05. If multiple hypothesis tests are each performed with an alpha of 0.05, then the more tests performed, the greater the chance of falsely rejecting at least one null hypothesis. It is often desirable to control the overall probability of making a false rejection, across all the comparisons, at a given level. Various methods are available to control the overall type I error (see the sketch after this list):

  • Tukey, recommended when comparing all pairs.
  • Dunnett, recommended when comparing against a control.
  • Scheffe, useful if planning to perform more than just pairwise or against-a-control comparisons. Overly conservative.
  • Bonferroni, equivalent to performing t-tests on each pair of groups, except it adjusts for the number of comparisons to control the overall type I error. Conservative.
  • LSD, equivalent to performing t-tests on each pair of groups, offering no control of the overall type I error. It should only be used if the ANOVA p-value is significant.
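
As a sketch of what all-pairwise comparisons compute, outside Analyse-it, using SciPy's tukey_hsd (available in SciPy 1.8 or later) with hypothetical data:

    from scipy.stats import tukey_hsd

    a = [5.1, 4.9, 5.4, 5.0]
    b = [5.8, 6.0, 5.6, 5.9]
    c = [5.3, 5.5, 5.2, 5.4]

    res = tukey_hsd(a, b, c)
    print(res)  # pairwise differences between means, with p-values
    ci = res.confidence_interval(confidence_level=0.95)
    print(ci.low, ci.high)  # confidence intervals around each pairwise difference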

To calculate multiple comparisons:

  1. If the 1-way ANOVA dialog box is not visible click Edit on the Analyse-it tab/toolbar.
  2. Click Contrasts then select All pairwise to compare each group against each other, or select Many against control to compare each group against a control group.
  3. Click Error protection then select Tukey, Dunnett, Scheffe, Bonferroni or LSD.
  4. If contrasting Many against a control, click Control group then select the group to use as the control against which all other groups will be contrasted.
  5. Enter Confidence interval, as a percentage between 50 and 100 excluding the % sign, for the confidence interval around the difference between the groups.
  6. Click OK.

For each comparison, the samples compared, the difference between the means, and the confidence interval are shown. If the confidence interval does not span zero, the difference is significant.

To disable multiple comparisons:

  1. If the 1-way ANOVA dialog box is not visible click Edit on the Analyse-it tab/toolbar.
  2. Click Contrasts then select None.
  3. Click OK.

References to further reading

  1. Handbook of Parametric and Nonparametric Statistical Procedures (3rd edition)
    David J. Sheskin, ISBN 1-58488-440-1, 2003; 667.
  2. Designing Experiments and Analyzing Data
    Maxwell S.E., Delaney H.D., ISBN 0-534-10374-X, 1989.