The [multiple comparison problem](https://en.wikipedia.org/wiki/Multiple_comparisons_problem "wikipedia:Multiple comparisons problem"), or multiple testing problem, arises in [[base/Statistics/statistical inference]] when making inferences after conducting multiple tests at the same time.
The probability of a [[Type I Error|false positive]] increases with the number of tests conducted. Specifically, the probability of at least one false positive across 20 independent tests at the 5% significance level is $1 - 0.95^{20} = 64.15\%$. You shouldn't be surprised to find a significant "link" between at least one of 20 predictors and your outcome variable. The same holds in reverse: testing a single predictor against 20 response variables also carries a roughly 64% chance of at least one false positive.
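A minimal sketch of that calculation, assuming independent tests and p-values that are uniform under the null hypothesis; it checks the $1 - 0.95^{20}$ figure both analytically and by simulation (the simulation size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05
n_tests = 20
n_sims = 100_000

# Analytic probability of at least one false positive across 20 tests
print(1 - (1 - alpha) ** n_tests)  # ~0.6415

# Monte Carlo check: under the null, p-values are Uniform(0, 1);
# count simulations where at least one test comes out "significant"
p = rng.uniform(size=(n_sims, n_tests))
print(np.mean((p < alpha).any(axis=1)))  # ~0.64
```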
To avoid the multiple comparison problem, use a **simultaneous test** or control the [False Discovery Rate](https://en.wikipedia.org/wiki/False_discovery_rate "wikipedia:False discovery rate"), which is the expected proportion of false positives among all positive (significant) results.
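As an illustration, the Benjamini-Hochberg procedure is one common way to control the FDR; a sketch using `statsmodels`, with made-up p-values standing in for 20 separate tests:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from 20 separate tests (illustrative values only)
pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074,
                  0.205, 0.212, 0.216, 0.222, 0.251, 0.269, 0.275,
                  0.340, 0.341, 0.384, 0.569, 0.594, 0.696])

# Benjamini-Hochberg: controls the expected proportion of false
# discoveries among the rejected hypotheses at level alpha
reject, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(reject)     # which hypotheses survive FDR control
print(pvals_adj)  # BH-adjusted p-values
```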