Testing-based procedures such as backward elimination, forward selection, and stepwise selection are not supported by statistical principles: the model selected this way need not be the best of the candidate models, and the repeated hypothesis tests suffer from the [[multiple comparison problem]]. Criterion-based methods are preferred; these include the [[Akaike Information Criterion]] (AIC) and the [[Bayes Information Criterion]] (BIC), with [[Adjusted R-squared]] and [[mean squared prediction error]] as further options for comparing candidate models. A common rule of thumb holds that AIC is better suited to models built for [[prediction]], while BIC is better suited to models built for [[explanation]]. An ANOVA (partial F) test is suitable for comparing two nested models, where one is a subset of the other, but comparing all candidate models pairwise is impractical and would itself run into the [[multiple comparison problem]].
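For ordinary least squares, AIC and BIC reduce to simple functions of the residual sum of squares, which makes the criterion comparison easy to sketch. A minimal NumPy illustration on synthetic data (the simulated predictors and all variable names here are hypothetical, not from the source); it also computes the partial F statistic for the nested comparison:

```python
import numpy as np

# Synthetic data: y depends on x1 only; x2 is an irrelevant predictor.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def rss_of_fit(X, y):
    """Residual sum of squares of the least-squares fit of y on X."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def aic_bic(rss, n, k):
    """AIC and BIC for a Gaussian linear model with k parameters,
    dropping additive constants shared by all candidate models."""
    ll_term = n * np.log(rss / n)
    return ll_term + 2 * k, ll_term + k * np.log(n)

X_small = np.column_stack([np.ones(n), x1])      # nested (true) model
X_big = np.column_stack([np.ones(n), x1, x2])    # adds the noise variable

rss_s = rss_of_fit(X_small, y)
rss_b = rss_of_fit(X_big, y)
aic_s, bic_s = aic_bic(rss_s, n, k=2)
aic_b, bic_b = aic_bic(rss_b, n, k=3)

# Partial F statistic for the nested comparison (1 extra parameter);
# a large value favors the bigger model.
F = ((rss_s - rss_b) / 1) / (rss_b / (n - 3))
```

Lower AIC or BIC is better. Note that BIC's per-parameter penalty log(n) exceeds AIC's constant 2 once n is larger than about 7, so BIC penalizes the extra (irrelevant) parameter more heavily here.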