Consider two models: a full model $\Omega$ and a reduced model $\omega$, where the reduced model uses a subset of the predictors in the full model. Because we prefer the more parsimonious model, we need evidence that the full model is better than the reduced model before selecting it. Each model has its own [[residual sum of squares|RSS]], which we can use to construct a [[test statistic]] with the [[F-distribution]]:

$F = \frac{(RSS_\omega - RSS_\Omega)/(p-q)}{RSS_\Omega / \Big(n - (p+1)\Big)} \sim F_{p-q, \ n-(p+1)}$

where $p$ is the number of predictors in the full model $\Omega$ and $q$ is the number of predictors in the reduced model $\omega$.

A special case of this F-test is the **Full F-Test**, which compares the full model to a model with no predictors (the intercept-only model). In [[R]], the last line of the `summary` of a linear model shows the `F-statistic` for the Full F-Test. If the [[p-value]] of the F-statistic is high, fail to reject the null hypothesis that the model with no predictors at all is sufficient. Always check the p-value of the F-statistic before reviewing any [[hypothesis test for individual regression parameters]].

To run a partial F-test in [[R]], fit both the full and reduced models and then use `anova` to compute the F-statistic:

```R
anova(lm_reduced, lm_full)
```
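As a sketch of the partial F-test, the statistic can be computed by hand from the two RSS values and checked against `anova`. The example below uses the built-in `mtcars` data; the choice of predictors is purely illustrative.

```R
# Partial F-test by hand and via anova(), using the built-in mtcars data.
# The predictor choice here is illustrative, not from the original note.
lm_full    <- lm(mpg ~ wt + hp + disp, data = mtcars)  # p = 3 predictors
lm_reduced <- lm(mpg ~ wt, data = mtcars)              # q = 1 predictor

# RSS for each model
rss_full    <- sum(resid(lm_full)^2)
rss_reduced <- sum(resid(lm_reduced)^2)

n <- nrow(mtcars)
p <- 3
q <- 1

# F-statistic from the formula above, with p - q and n - (p + 1) degrees of freedom
f_stat <- ((rss_reduced - rss_full) / (p - q)) / (rss_full / (n - (p + 1)))
p_val  <- pf(f_stat, df1 = p - q, df2 = n - (p + 1), lower.tail = FALSE)

# The F value and Pr(>F) in the anova() output should match f_stat and p_val
anova(lm_reduced, lm_full)
```

If `p_val` is below the chosen significance level, reject the null hypothesis that the reduced model is sufficient and prefer the full model.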