
# Hypothesis testing

Hypothesis testing is the formal process of using a sample to infer whether or not a statement about the population appears to be true.

A hypothesis test is a method of making decisions. To perform a hypothesis test you must state a null hypothesis and an alternative hypothesis. The null hypothesis states what the study intends to reject and disprove. The alternative hypothesis is usually the negation of the null and states what the study is trying to establish.

Once the hypotheses have been stated, a statistical test calculates a test statistic and p-value. The p-value is the probability of obtaining a test statistic at least as extreme as that observed, assuming the null hypothesis is true. It is a measure of the evidence against the null hypothesis. When the p-value is small, the data are unlikely to have occurred if the null hypothesis is true, so you can reject the null hypothesis and accept the alternative hypothesis. When the p-value is large, you cannot reject the null hypothesis; there is insufficient evidence against it. It is not possible to prove the null hypothesis, only to disprove it. The p-value does not allow you to make any statement about the probability of the null hypothesis being true; it is a statement about the probability of observing the data given that the null hypothesis is true.
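As an illustrative sketch (not Analyse-it's own computation), the p-value idea can be shown with a two-sided one-sample z-test, which assumes the population standard deviation is known. The data and parameter values below are hypothetical.

```python
import math
import statistics

def one_sample_z_test(sample, mu0, sigma):
    """Two-sided one-sample z-test with known population sd.

    Returns the z statistic and the p-value: the probability of a
    statistic at least as extreme as observed, assuming H0 is true.
    """
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(n))
    # Standard normal CDF via the error function (stdlib only)
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    p = 2 * (1 - cdf(abs(z)))  # two-sided: both tails count as "extreme"
    return z, p

# Hypothetical sample; H0: mean = 100, known sigma = 15
sample = [108, 112, 98, 105, 110, 103, 107, 115]
z, p = one_sample_z_test(sample, mu0=100, sigma=15)
```

Here the p-value is about 0.17: data this extreme would occur fairly often under the null hypothesis, so there is insufficient evidence to reject it.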

Often a fixed significance level (denoted by the lowercase Greek letter alpha, α) is used to decide whether the test is statistically significant. The significance level is the probability of rejecting the null hypothesis when it is in fact true. When the p-value is less than the significance level, you can declare the test statistically significant. A 5% significance level is typical, implying a 5% chance of wrongly rejecting the null hypothesis when it is in fact true. If more certainty is required, use a 1% significance level. Regardless, you should always report the p-value itself rather than only a statement of statistical significance.
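The decision rule is a simple comparison of the p-value against α. A minimal sketch (the function name and p-values are hypothetical) that also keeps the p-value in the report, as recommended above:

```python
ALPHA = 0.05  # a 5% significance level is typical

def decide(p_value, alpha=ALPHA):
    """Compare the p-value to alpha, but always report the p-value itself."""
    if p_value < alpha:
        return f"p = {p_value:.4f}: reject H0 at the {alpha:.0%} level"
    return f"p = {p_value:.4f}: fail to reject H0 at the {alpha:.0%} level"

print(decide(0.031))              # significant at the 5% level
print(decide(0.031, alpha=0.01))  # not significant at the stricter 1% level
```

The same p-value leads to different decisions at different significance levels, which is one reason to report the p-value rather than only the verdict.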

It is important to remember that statistically significant does not imply practically important. A difference may be so small as to be practically useless even though it is statistically significant. Alternatively, the sample size may have been so small that the test lacked the power to detect anything but a huge difference as statistically significant. It is therefore essential that you always interpret the p-value together with a point and interval estimate of the parameter or effect size.
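The dependence of significance on sample size can be sketched with the same z-test idea: a fixed, practically negligible mean difference of 0.1 units (with a hypothetical sigma of 15) is nowhere near significant in a small sample, yet highly significant in an enormous one.

```python
import math

def z_and_p(mean_diff, sigma, n):
    """Two-sided z-test p-value for a given observed mean difference."""
    z = mean_diff / (sigma / math.sqrt(n))
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return z, 2 * (1 - cdf)

# The same tiny difference, two sample sizes (hypothetical values)
_, p_small = z_and_p(0.1, sigma=15, n=100)         # p far above 0.05
_, p_large = z_and_p(0.1, sigma=15, n=10_000_000)  # p effectively zero
```

The large-sample result is statistically significant but the 0.1-unit difference may still be practically useless, which is why the p-value must be read alongside an estimate of the effect size.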

**Available in Analyse-it Editions**

- Standard edition
- Method Validation edition
- Quality Control & Improvement edition
- Ultimate edition


Version 5.60

Published 27-Apr-2020