# Parameter estimates

Parameter estimates (also called coefficients) are the change in the response associated with a one-unit change of the predictor, all other predictors being held constant.

The unknown model parameters are estimated using least-squares estimation.

A coefficient describes the size of the contribution of that predictor; a near-zero coefficient indicates that the variable has little influence on the response. The sign of the coefficient indicates the direction of the relationship, although the sign can change when other terms are added to the model, so interpret it with caution. A confidence interval expresses the uncertainty in the estimate, under the assumption of normally distributed errors. Due to the central limit theorem, violation of the normality assumption is not a problem if the sample size is moderate.
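As a concrete illustration, a 95% confidence interval is the estimate plus or minus the appropriate t quantile times its standard error. This is a minimal sketch in plain Python, using hypothetical values (the estimate, standard error, and degrees of freedom are made up, not from any particular dataset):

```python
# Hypothetical values: a slope estimate of 0.75 kg per cm with a
# standard error of 0.12, from a fit with 28 residual degrees of freedom.
estimate = 0.75
std_error = 0.12
t_crit = 2.048  # two-sided 95% critical value of t with 28 df

lower = estimate - t_crit * std_error
upper = estimate + t_crit * std_error
print(f"95% CI: [{lower:.3f}, {upper:.3f}] kg per cm")
```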

- For quantitative terms, the coefficient represents the rate of change in the response per 1-unit change in the predictor, with all other predictors held constant. The units of measurement of the coefficient are the units of the response per unit of the predictor. For example, in a simple model for the response Weight (kg) with predictor Height (cm), a coefficient for Height of 0.75 can be expressed as *0.75 kg per cm*, indicating a 0.75 kg increase in weight per 1 cm increase in height.

  When a predictor is a logarithmic transformation of the original variable, the coefficient is the rate of change in the response per 1-unit change in the log of the predictor. *Base 2* and *base 10* logs are commonly used transforms. For a *base 2* log, the coefficient can be interpreted as the change in the response for a doubling of the predictor. For a *base 10* log, the coefficient can be interpreted as the change in the response when the predictor is multiplied by 10, or as the % change in the response per % change in the predictor.
- For categorical terms, there is a coefficient for each level:
  - For nominal predictors, the coefficients represent the difference between each level mean and the grand mean. Analyse-it uses *effect coding* (also known as *mean deviation coding*) for nominal terms; the parameter estimates for a term using effect coding sum to 0.
  - For ordinal predictors, the coefficients represent the difference between each level mean and the baseline mean. Analyse-it uses *reference coding* for ordinal terms, with the first level as the baseline (reference) level.
- For the constant term, the coefficient is the response when all predictors are 0, and the units of measurement are the same as the response variable.
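The quantitative-term interpretation above can be sketched with the closed-form least-squares estimates for a simple regression. This plain-Python example uses made-up Height/Weight values chosen to lie exactly on the line weight = 0.75 × height − 60, so the fitted slope and intercept are exact:

```python
# Hypothetical data: Height (cm) and Weight (kg), constructed so that
# weight = 0.75 * height - 60 exactly.
heights = [160.0, 165.0, 170.0, 175.0, 180.0]
weights = [0.75 * h - 60.0 for h in heights]

n = len(heights)
mean_x = sum(heights) / n
mean_y = sum(weights) / n

# Least-squares estimates: slope = Sxy / Sxx, intercept = ybar - slope * xbar
sxx = sum((x - mean_x) ** 2 for x in heights)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(heights, weights))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# The slope is read as: a 0.75 kg increase in weight per 1 cm of height;
# the intercept is in the units of the response (kg).
print(f"slope = {slope:.2f} kg per cm, intercept = {intercept:.2f} kg")
```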

A standardized parameter estimate (commonly known as standardized beta coefficient) removes the unit of measurement of predictor and response variables. They represent the change in standard deviations of the response for 1 standard deviation change of the predictor. You can use them to compare the relative effects of predictors measured on different scales.
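For a simple regression, the standardized coefficient can be sketched as the raw slope rescaled by the ratio of the predictor and response standard deviations. A plain-Python illustration with hypothetical data on two different scales (in simple regression the result also equals the correlation coefficient):

```python
import math

# Hypothetical data.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
sxx = sum((v - mean_x) ** 2 for v in x)
syy = sum((v - mean_y) ** 2 for v in y)
sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))

slope = sxy / sxx                    # raw coefficient, in y-units per x-unit
beta = slope * math.sqrt(sxx / syy)  # standardized: SDs of y per SD of x

print(f"slope = {slope:.3f}, standardized beta = {beta:.3f}")
```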

VIF, the variance inflation factor, represents the increase in the variance of the parameter estimate due to correlation (collinearity) between predictors. Collinearity between the predictors can lead to unstable parameter estimates. As a rule of thumb, VIF should be close to the minimum value of 1, indicating no collinearity. When VIF is greater than 5, there is high collinearity between predictors.
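The VIF for a predictor can be sketched as 1 / (1 − R²), where R² comes from regressing that predictor on all the others. A plain-Python illustration with two hypothetical, nearly collinear predictors:

```python
# Hypothetical predictors; x2 is almost an exact multiple of x1.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 4.0, 6.0, 8.0, 11.0]

n = len(x1)
m1, m2 = sum(x1) / n, sum(x2) / n
s11 = sum((v - m1) ** 2 for v in x1)
s22 = sum((v - m2) ** 2 for v in x2)
s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))

# With only two predictors, R² of x2 regressed on x1 is the squared correlation.
r_squared = s12 ** 2 / (s11 * s22)
vif = 1.0 / (1.0 - r_squared)

print(f"R^2 = {r_squared:.4f}, VIF = {vif:.1f}")  # far above the rule-of-thumb cut-off of 5
```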

A t-test formally tests the null hypothesis that the parameter is equal to 0, against the alternative hypothesis that it is not equal to 0. When the p-value is small, you can reject the null hypothesis and conclude that the parameter is not equal to 0 and it does contribute to the model.
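The t statistic for a coefficient is the estimate divided by its standard error. This plain-Python sketch, with hypothetical data for a simple regression, compares |t| with the two-sided 5% critical value for 3 degrees of freedom (the software reports the exact p-value):

```python
import math

# Hypothetical data for a simple regression fit.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
sxx = sum((v - mean_x) ** 2 for v in x)
sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Residual mean square with n - 2 degrees of freedom.
sse = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
mse = sse / (n - 2)
se_slope = math.sqrt(mse / sxx)

t = slope / se_slope
t_crit = 3.182  # two-sided 5% critical value of t with 3 df
print(f"t = {t:.3f}; reject H0: slope = 0? {abs(t) > t_crit}")
```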

When a parameter is not deemed to contribute statistically to the model, you can consider removing it. However, you should be cautious of removing terms that are known to contribute by some underlying mechanism, regardless of the statistical significance of a hypothesis test, and recognize that removing a term can alter the effect of other terms.

**Available in Analyse-it Editions**

- Standard edition
- Method Validation edition
- Quality Control & Improvement edition
- Ultimate edition


Version 6.15

Published 18-Apr-2023