Cross validation is used for [[model evaluation]] and [[hyperparameter tuning]]. Popular techniques include [[k-fold cross-validation]], [[stratified k-fold cross-validation]], [[leave-one-out cross-validation]], and [[time series cross-validation]].
For model evaluation, cross validation makes it possible to evaluate a model without permanently withholding a validation set from training: every observation is used for training in some folds and for validation in another, which is especially useful when the dataset is small.
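A minimal pure-Python sketch of k-fold evaluation, using a deliberately trivial "predict the training-fold mean" model as a stand-in for a real one (the function names and the toy model are illustrative, not from any particular library):

```python
import random

def k_fold_splits(n, k, seed=0):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # round-robin assignment to k folds
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val

def cross_val_mse(ys, k=5):
    """Average validation MSE of a toy model that predicts the training-fold mean."""
    scores = []
    for train, val in k_fold_splits(len(ys), k):
        mean_y = sum(ys[j] for j in train) / len(train)   # "fit" on the training fold
        mse = sum((ys[j] - mean_y) ** 2 for j in val) / len(val)
        scores.append(mse)
    return sum(scores) / len(scores)   # one score per fold, averaged
```

Every observation contributes to validation exactly once, so no data is permanently lost to a held-out set. In practice a library implementation (e.g. scikit-learn's `cross_val_score`) would replace this sketch.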
In the context of hyperparameter tuning, cross validation is performed once per candidate hyperparameter value (or combination of values), and the candidate with the best average validation score is selected. A test set can still be withheld and used exactly once, after the best hyperparameter values have been selected, to evaluate the final model. In effect, only the training dataset is used in cross validation.
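The tuning workflow above can be sketched in pure Python. The toy model predicts `alpha * (training-fold mean)`, where `alpha` is a made-up shrinkage hyperparameter chosen purely for illustration; the test set is split off first and touched only once at the end:

```python
import random

def k_fold_splits(n, k, seed=0):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        yield [j for f in folds[:i] + folds[i + 1:] for j in f], folds[i]

def cv_mse_for_alpha(ys, alpha, k=5):
    """CV MSE of a toy model predicting alpha * (training-fold mean)."""
    scores = []
    for train, val in k_fold_splits(len(ys), k):
        pred = alpha * sum(ys[j] for j in train) / len(train)
        scores.append(sum((ys[j] - pred) ** 2 for j in val) / len(val))
    return sum(scores) / len(scores)

# Synthetic data centred at 2.0; hold out a test set BEFORE tuning.
rng = random.Random(1)
ys = [2.0 + rng.gauss(0, 0.1) for _ in range(30)]
train_ys, test_ys = ys[:24], ys[24:]

# Cross-validate each candidate on the training data only.
grid = [0.0, 0.5, 0.9, 1.0]
best_alpha = min(grid, key=lambda a: cv_mse_for_alpha(train_ys, a))

# The withheld test set is used exactly once, after selection.
final_pred = best_alpha * sum(train_ys) / len(train_ys)
test_mse = sum((y - final_pred) ** 2 for y in test_ys) / len(test_ys)
```

Because the data is centred near 2.0, unshrunk `alpha = 1.0` should win the grid search; the key structural point is that `test_ys` never influences which candidate is chosen.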