An F-test formally tests whether a term contributes to the model.
In most modeling analyses the aim is a model that describes the relationship using as few terms as possible. It is therefore worth examining each term in the model to decide whether it provides any useful information.
An analysis of variance (ANOVA) table shows the reduction in error achieved by including each term. An F-test for each term is a formal hypothesis test of whether the term provides useful information to the model. The null hypothesis states that the term does not contribute to the model; the alternative hypothesis states that it does. When the p-value is small, you can reject the null hypothesis and conclude that the term does contribute to the model.
When a term does not contribute statistically to the model, you may consider removing it. However, you should be cautious about removing terms that are known to contribute through some underlying mechanism, regardless of the statistical significance of a hypothesis test, and recognize that removing a term can alter the effect of the other terms.
You should choose the type of F-test based on which other terms you want the test to adjust for.
Partial: Measures the effect of the term adjusted for all other terms in the model. The sum of squares in the ANOVA table is known as the Type III sum of squares.
A partial F-test is equivalent to testing if all the parameter estimates for the term are equal to zero.
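As a minimal sketch of this idea, the partial F-test can be computed by comparing the residual sum of squares of the full model with that of a reduced model that omits the term under test. The data and variable names here are hypothetical, and the fit uses only NumPy and SciPy rather than a dedicated statistics package.

```python
import numpy as np
from scipy import stats

# Hypothetical data: y depends on x1 and x2 (both coefficients nonzero).
rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones(n)
X_full = np.column_stack([ones, x1, x2])   # intercept + x1 + x2
X_reduced = np.column_stack([ones, x1])    # drop the term under test (x2)

rss_full = rss(X_full, y)
rss_reduced = rss(X_reduced, y)

q = 1                                      # parameters set to zero under H0
df_resid = n - X_full.shape[1]
F = ((rss_reduced - rss_full) / q) / (rss_full / df_resid)
p_value = stats.f.sf(F, q, df_resid)       # small p: the term contributes
print(F, p_value)
```

If the term consisted of several parameters (for example, a categorical variable coded as several dummy variables), the same calculation would apply with q set to the number of parameters dropped.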
Sequential: Measures the effect of the term adjusting only for the terms that precede it in the model. The sum of squares in the ANOVA table is known as the Type I sum of squares.
A sequential F-test is often useful when fitting a polynomial regression.
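The polynomial case can be sketched as follows: terms of increasing degree are added in order, and each sequential F statistic uses the reduction in the residual sum of squares from adding that term, divided by the residual mean square of the largest model. The data here are hypothetical, with a true quadratic relationship.

```python
import numpy as np
from scipy import stats

# Hypothetical data: the true relationship is quadratic in x.
rng = np.random.default_rng(1)
n = 60
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 0.8 * x - 1.5 * x**2 + rng.normal(scale=0.5, size=n)

def rss(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

# Nested fits: intercept only, then +x, +x^2, +x^3 (Type I ordering).
designs = [np.vander(x, d + 1, increasing=True) for d in range(4)]
rss_seq = [rss(X, y) for X in designs]

df_resid = n - designs[-1].shape[1]        # residual df of the largest model
mse_full = rss_seq[-1] / df_resid
results = []
for d in range(1, 4):
    F = (rss_seq[d - 1] - rss_seq[d]) / mse_full  # 1 df per added term
    p = stats.f.sf(F, 1, df_resid)
    results.append((F, p))
    print(f"degree-{d} term: F = {F:.2f}, p = {p:.4f}")
```

Because each test adjusts only for lower-order terms, this ordering answers the natural question for polynomials: does adding the next-higher power improve the fit beyond what the lower powers already explain?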
Note: The squared t-statistic of a coefficient t-test is equivalent to the F statistic of the partial F-test. The t-test is not suitable when the model includes a categorical variable coded as dummy predictor variables, as the term then comprises multiple coefficients, each with its own t-test.
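The equivalence in the note above can be checked numerically: for a single-coefficient term, squaring the coefficient's t-statistic reproduces the partial F statistic exactly. This sketch uses hypothetical data and computes standard errors from the usual OLS covariance matrix.

```python
import numpy as np

# Hypothetical data with two predictors.
rng = np.random.default_rng(2)
n = 40
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 + 1.2 * x1 - 0.7 * x2 + rng.normal(size=n)

# Full model fit and the coefficient t-statistic for x2.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df_resid = n - X.shape[1]
mse = (resid @ resid) / df_resid
cov = mse * np.linalg.inv(X.T @ X)         # OLS covariance of the estimates
t_x2 = beta[2] / np.sqrt(cov[2, 2])

# Partial F for x2: refit without it and compare residual sums of squares.
X_r = X[:, :2]
beta_r, *_ = np.linalg.lstsq(X_r, y, rcond=None)
rss_r = np.sum((y - X_r @ beta_r) ** 2)
rss_f = resid @ resid
F_x2 = (rss_r - rss_f) / (rss_f / df_resid)

print(t_x2**2, F_x2)                       # the two statistics agree
```

For a term spanning several dummy-coded coefficients, no single t-test corresponds to the partial F-test, which is why the F-test is the appropriate choice there.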