# 1-way ANOVA

This procedure is available in both the Analyse-it Standard and the Analyse-it Method Evaluation editions.

1-way ANOVA is a test for a difference in central location (mean) between two or more independent samples.

The requirements of the test are:

- Two or more independent samples measured on a continuous scale.
- Samples are from populations with a normal distribution and have the same variance, also known as homogeneity of variance.

- Arranging the dataset
- Using the test
- Comparing groups with multiple comparisons
- References to further reading

## Arranging the dataset

Data in existing Excel worksheets can be used and should be arranged in a List dataset layout or Table dataset layout. The dataset must contain a continuous scale variable and a nominal/ordinal scale variable containing two or more independent groups.

When entering new data we recommend using New Dataset to create a new **2 variables (1 categorical)** dataset ready for data entry.

## Using the test

To start the test:

- Excel 2007:

  Select any cell in the range containing the dataset to analyse, then click **Compare Groups** on the **Analyse-it** tab, then click **1-way ANOVA**.

- Excel 97, 2000, 2002 & 2003:

  Select any cell in the range containing the dataset to analyse, then click **Analyse** on the **Analyse-it** toolbar, click **Compare Groups**, then click **1-way ANOVA**.

- If the dataset is arranged using the table layout:

  Tick the samples to compare in **Variable - Groups**.

- If the dataset is arranged using the list layout:

  Click **Variable** and select the variable to compare, then click **Factor** and select the independent variable containing the groups to compare.

- Click **OK** to run the test.

The report shows the number of observations analysed, and, if applicable, how many missing values were excluded. Summary statistics, including pooled standard error, are shown for each sample.

**METHOD** The pooled standard error is calculated from the pooled variance, which is based upon all the observations and so is a better estimate than the variance calculated separately for each sample.
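The pooled-variance calculation described above can be sketched in Python. This is an illustrative sketch with made-up sample data, not the add-in's own code: the pooled variance is the degrees-of-freedom weighted average of the individual sample variances, and the pooled standard error of each sample mean follows from it.

```python
import numpy as np

# Hypothetical measurements for three independent samples
# (illustrative data only, not taken from the text)
samples = [np.array([23.1, 24.8, 25.0, 22.7]),
           np.array([26.2, 27.5, 25.9]),
           np.array([21.4, 22.0, 23.3, 21.9, 22.6])]

# Pooled variance: degrees-of-freedom weighted average of the sample
# variances, so it draws on all the observations
dfs = [len(s) - 1 for s in samples]
pooled_var = sum(df * s.var(ddof=1) for df, s in zip(dfs, samples)) / sum(dfs)

# Pooled standard error of each sample mean: sqrt(pooled variance / n)
pooled_se = [float(np.sqrt(pooled_var / len(s))) for s in samples]
```

Because every sample contributes observations to the estimate, the pooled variance is more stable than any single sample's variance, which is why the report uses it for the standard errors.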

An analysis of variance table is shown which partitions the total variance into components between and within the samples (residual or error variance). The between- and within-sample variances are compared with an F-test to determine if they differ. The *p*-value is the probability of observing a difference between the sample means at least as large as that seen, if the null hypothesis, that all the samples have the same mean, is in fact true. A significant *p*-value implies that at least two samples have different means. To determine which samples differ, perform multiple comparisons.

A simpler way of understanding how the table relates to the hypothesis of a difference in means is as a comparison of two models: the total variation is the variation when a model with a single common mean is fitted to all the samples, and the residual variation is the variation when a model with a separate mean for each sample is fitted. The between-sample variation is the difference between these two models, that is, the reduction in variation achieved by fitting a separate mean to each sample.
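The model-comparison view above can be verified numerically. The sketch below, using hypothetical data and SciPy's `f_oneway` (an assumption for illustration; the add-in performs this inside Excel), computes the total, within, and between sums of squares directly and checks that the resulting F statistic matches the library's 1-way ANOVA.

```python
import numpy as np
from scipy import stats

# Hypothetical data for three independent groups (illustrative only)
groups = [np.array([23.1, 24.8, 25.0, 22.7]),
          np.array([26.2, 27.5, 25.9]),
          np.array([21.4, 22.0, 23.3, 21.9, 22.6])]
all_obs = np.concatenate(groups)

# Total variation: fit a single common mean to every observation
ss_total = ((all_obs - all_obs.mean()) ** 2).sum()
# Residual (within-sample) variation: fit each sample its own mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
# Between-sample variation: the difference between the two models
ss_between = ss_total - ss_within

df_between = len(groups) - 1            # k - 1
df_within = len(all_obs) - len(groups)  # N - k
f_manual = (ss_between / df_between) / (ss_within / df_within)

# SciPy's 1-way ANOVA yields the same F statistic, plus a p-value
f_scipy, p_value = stats.f_oneway(*groups)
```

The manual partition and the library result agree, which is exactly the identity the ANOVA table expresses.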

## Comparing groups with multiple comparisons

**Multiple comparisons** allow pairs of samples to be compared to determine which have different means.

When a hypothesis test is performed, the *p*-value gives the probability of observing a result at least as extreme as that seen if the null hypothesis is in fact true. It is usual to declare the *p*-value significant when it is smaller than a stated alpha, say 0.05. If multiple hypothesis tests are each performed at an alpha of 0.05, then the more tests performed, the greater the chance of falsely rejecting at least one null hypothesis. It is often desirable to control the overall probability of a false rejection at a given level across all the comparisons. Various methods are available to control the overall type I error:

- *Tukey*, recommended when comparing all pairs.
- *Dunnett*, recommended when comparing against a control.
- *Scheffe*, useful if planning to perform more than just pairwise or against-a-control comparisons. Overly conservative.
- *Bonferroni*, equivalent to performing t-tests on each pair of groups, except it adjusts for the number of comparisons to control the overall type I error. Conservative.
- *LSD*, equivalent to performing t-tests on each pair of groups, offering no control of the overall type I error. It should only be used if the ANOVA *p*-value is significant.
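The inflation of the overall error rate described above is easy to quantify. As a minimal sketch, assuming independent tests each run at alpha = 0.05, the family-wise error rate is 1 − (1 − alpha)^k, and Bonferroni's adjustment simply divides alpha by the number of comparisons:

```python
# Family-wise error rate for k independent tests, each at alpha = 0.05:
# the chance of at least one false rejection grows quickly with k
alpha = 0.05
fwer = {k: 1 - (1 - alpha) ** k for k in (1, 3, 6, 10)}

# Bonferroni tests each comparison at alpha / k, which keeps the
# overall type I error at or below alpha
k = 10
bonferroni_alpha = alpha / k
```

With 10 comparisons the overall error rate is roughly 0.40 rather than 0.05, which is why one of the error-protection methods above should be applied.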

To calculate multiple comparisons:

- If the 1-way ANOVA dialog box is not visible, click **Edit** on the **Analyse-it** tab/toolbar.
- Click **Contrasts**, then select **All pairwise** to compare each group against each other, or select **Many against control** to compare each group against a control group.
- Click **Error protection**, then select **Tukey**, **Dunnett**, **Scheffe**, **Bonferroni** or **LSD**.
- If contrasting **Many against control**, click **Control group** and select the group to use as the control against which all other groups will be contrasted.
- In **Confidence interval**, enter the confidence level for the interval around the difference between the groups, as a percentage between 50 and 100 excluding the % sign.
- Click **OK**.

The samples compared, the difference between the means, and the confidence interval are shown. If the confidence interval does not span zero then the difference is significant.
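The same does-the-interval-span-zero check can be illustrated outside Excel. This sketch uses SciPy's `tukey_hsd` on hypothetical data (an assumption for illustration, not the add-in's implementation) to produce pairwise differences and confidence intervals like those in the report:

```python
import numpy as np
from scipy import stats

# Hypothetical data for three independent groups (illustrative only)
groups = [np.array([23.1, 24.8, 25.0, 22.7]),
          np.array([26.2, 27.5, 25.9]),
          np.array([21.4, 22.0, 23.3, 21.9, 22.6])]

# Tukey's HSD compares every pair of groups while controlling the
# overall type I error
res = stats.tukey_hsd(*groups)
ci = res.confidence_interval(confidence_level=0.95)

# A pairwise difference is significant when its confidence interval
# does not span zero
for i in range(len(groups)):
    for j in range(i + 1, len(groups)):
        significant = not (ci.low[i, j] <= 0 <= ci.high[i, j])
        print(f"group {i} vs group {j}: "
              f"diff {res.statistic[i, j]:+.2f}, "
              f"95% CI ({ci.low[i, j]:.2f}, {ci.high[i, j]:.2f}), "
              f"significant: {significant}")
```

Reading the report works the same way: scan each pairwise interval and flag the ones that exclude zero.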

To disable multiple comparisons:

- If the 1-way ANOVA dialog box is not visible, click **Edit** on the **Analyse-it** tab/toolbar.
- Click **Contrasts**, then select **None**.
- Click **OK**.

## References to further reading

- Handbook of Parametric and Nonparametric Statistical Procedures (3rd edition). David J. Sheskin. ISBN 1-58488-440-1, 2003; 667.
- Designing Experiments and Analyzing Data. Maxwell S.E., Delaney H.D. ISBN 0-534-10374-X, 1989.
