You are viewing documentation for the old version 2.20 of Analyse-it. If you are using version 3.00 or later we recommend you go to the latest documentation.

2-way ANOVA

This procedure is available in both the Analyse-it Standard and Analyse-it Method Evaluation editions.

2-way ANOVA, or more technically 2-way between-subjects factor ANOVA, is a test for a difference in central location (mean) in a sample classified by two factors.

The requirements of the test are:

  • A sample measured on a continuous scale, observed for two ordinal or nominal scale factors.
  • Samples are from normally distributed populations and must have the same variance, also known as homogeneity of variance.


Arranging the dataset

Data in existing Excel worksheets can be used and should be arranged in a List dataset layout. The dataset must contain a continuous scale variable and two nominal/ordinal scale variables (factors).

When entering new data we recommend using New Dataset to create a new 3 variables (2 categorical) dataset ready for data entry.

Using the test

To start the test:

  1. Excel 2007:
    Select any cell in the range containing the dataset to analyse, then click Compare Groups on the Analyse-it tab, then click 2-way ANOVA.

    Excel 97, 2000, 2002 & 2003:
    Select any cell in the range containing the dataset to analyse, then click Analyse on the Analyse-it toolbar, click Compare Groups, then click 2-way ANOVA.

  2. If the dataset is arranged using the 2-way table layout:
    Click Factor A and Factor B and select the variables to use.

    If the dataset is arranged using the list layout:
    Click Factor A and Factor B and select the independent variables (factors) containing the groups, then click Variable and select the dependent variable.

  3. Tick Include interaction term to test the interaction between the factors. Clear the checkbox only if you are sure there is no interaction.
  4. Click OK to run the test.

The report shows the number of observations analysed, and, if applicable, how many missing values were excluded. Summary statistics, including pooled standard error, for the sample classified by each factor are then shown.

METHOD The pooled standard error for each sample is calculated from the pooled variance. The pooled variance is based upon all the observations and so is a better estimate than the variance calculated separately for each group. However, it is only valid when the equal-variance requirement of the test, mentioned above, is met.
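Analyse-it performs this calculation for you; purely as an illustration of the method note above, here is a minimal Python sketch (function names are my own) of how a pooled variance and the resulting per-group standard errors can be computed:

```python
from math import sqrt
from statistics import variance

def pooled_variance(groups):
    """Pooled variance: within-group sums of squares divided by pooled
    degrees of freedom, using every observation from every group."""
    num = sum((len(g) - 1) * variance(g) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return num / den

def pooled_se(groups):
    """Standard error of each group mean, all based on the pooled variance."""
    vp = pooled_variance(groups)
    return [sqrt(vp / len(g)) for g in groups]
```

For example, with groups [[1, 2, 3], [2, 4, 6]] the group variances are 1 and 4, so the pooled variance is (2·1 + 2·4)/4 = 2.5, and each group's standard error is the square root of 2.5 divided by that group's size.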

An analysis of variance table is shown which partitions the total variance into components: variance between the groups of each factor individually, variance due to interaction effects, and variance within the samples (residual or error variance). If Include interaction term was cleared, excluding the possibility of an interaction between the factors, the interaction variance is pooled into the residual/error variance.
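The partition described above can be sketched in a few lines of Python for the simplest case of a balanced design (equal replicates in every cell); the function name and data layout are illustrative, not Analyse-it's own:

```python
from itertools import product
from statistics import mean

def twoway_ss(cells):
    """Partition the total sum of squares for a balanced two-way design.
    `cells` maps (level_a, level_b) -> list of replicate observations."""
    A = sorted({a for a, _ in cells})
    B = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))           # replicates per cell (balanced)
    all_obs = [x for obs in cells.values() for x in obs]
    grand = mean(all_obs)
    mean_a = {a: mean([x for b in B for x in cells[(a, b)]]) for a in A}
    mean_b = {b: mean([x for a in A for x in cells[(a, b)]]) for b in B}
    ss_a = len(B) * n * sum((mean_a[a] - grand) ** 2 for a in A)
    ss_b = len(A) * n * sum((mean_b[b] - grand) ** 2 for b in B)
    ss_ab = n * sum((mean(cells[(a, b)]) - mean_a[a] - mean_b[b] + grand) ** 2
                    for a, b in product(A, B))
    ss_e = sum((x - mean(cells[(a, b)])) ** 2
               for a, b in product(A, B) for x in cells[(a, b)])
    ss_t = sum((x - grand) ** 2 for x in all_obs)
    return {"A": ss_a, "B": ss_b, "AxB": ss_ab, "Error": ss_e, "Total": ss_t}
```

For a balanced design the four components sum exactly to the total sum of squares, which is the partition shown in the report's ANOVA table; unbalanced designs require more care and are handled by the software.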

F-tests are used to compare the between- and within-sample variances. For each factor a hypothesis test is shown against the null hypothesis that the group means are equal. The p-value is the probability of observing a difference at least as large as that seen in the samples if the null hypothesis were in fact true. A significant p-value implies that at least two groups have different means. If the interaction term is included, a significant interaction p-value implies there is an interaction between the factors affecting the means, i.e. the differences between the means of the groups of one factor change across the groups of the other factor (and vice-versa). In such a case it is difficult to interpret the main effects.
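To make the F-test concrete: each sum of squares is divided by its degrees of freedom to give a mean square, and each effect's F statistic is its mean square divided by the error mean square. A hypothetical sketch (names and signature are my own), assuming the sums of squares have already been computed:

```python
def f_statistics(ss, df_a, df_b, df_e):
    """F ratios for a two-way ANOVA with interaction.
    `ss` maps "A", "B", "AxB", "Error" to sums of squares;
    df_a, df_b are the factor degrees of freedom (levels - 1),
    df_e the error degrees of freedom."""
    df = {"A": df_a, "B": df_b, "AxB": df_a * df_b, "Error": df_e}
    ms = {k: ss[k] / df[k] for k in df}           # mean squares
    return {k: ms[k] / ms["Error"] for k in ("A", "B", "AxB")}
```

The p-value reported for each effect is then the upper tail probability of its F ratio under an F distribution with that effect's and the error's degrees of freedom.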

References to further reading

  1. Handbook of Parametric and Nonparametric Statistical Procedures (3rd edition)
    Sheskin D.J. ISBN 1-58488-440-1, 2003.
  2. Designing Experiments and Analyzing Data
    Maxwell S.E., Delaney H.D. ISBN 0-534-10374-X, 1989.