Errors in [[hypothesis testing]] result from randomness in the sample, not from mistakes in the analysis. Domain experts can help design hypothesis tests by indicating how much tolerance there is for each kind of error. Two types of errors are distinguished:
- **[[Type I Error]]** (false positive): rejecting the null hypothesis when it is true
- **[[Type II Error]]** (false negative): failing to reject the null hypothesis when it is false
The two types of error are summarized in the following table.
| | Fail to Reject $H_0$ | Reject $H_0$ |
| --------------- | :------------------: | :----------: |
| **$H_0$ True** | ✅ | Type I Error |
| **$H_0$ False** | Type II Error | ✅ |
You can only make a Type I error in the universe in which the null hypothesis is true. You can only make a Type II error in the universe in which the null hypothesis is false.
The Type I and Type II error rates are not complementary in general, but there is often a tradeoff between them: lowering $\alpha$ typically raises $\beta$. Only in special cases are they exactly complementary ( $\beta = 1 - \alpha$ ).
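The tradeoff can be seen in a small simulation. The sketch below (an illustrative setup, not from the source: a one-sided z-test of $H_0: \mu = 0$ against a true mean of 0.5, with known $\sigma = 1$ and $n = 30$) estimates both error rates at several values of $\alpha$; as $\alpha$ grows, the estimated $\beta$ shrinks.

```python
import random
from statistics import NormalDist

random.seed(42)

def sim_error_rates(alpha, n=30, mu1=0.5, sims=20_000):
    """Estimate Type I and Type II error rates for a one-sided z-test
    of H0: mu = 0 vs H1: mu > 0, with known sigma = 1.
    The effect size mu1 and sample size n are illustrative assumptions."""
    # Rejection cutoff for the sample mean at significance level alpha
    crit = NormalDist().inv_cdf(1 - alpha) / n ** 0.5
    type1 = type2 = 0
    for _ in range(sims):
        # Universe where H0 is true: data centered at 0
        xbar0 = sum(random.gauss(0, 1) for _ in range(n)) / n
        if xbar0 > crit:      # rejecting a true H0 -> Type I error
            type1 += 1
        # Universe where H0 is false: data centered at mu1
        xbar1 = sum(random.gauss(mu1, 1) for _ in range(n)) / n
        if xbar1 <= crit:     # failing to reject a false H0 -> Type II error
            type2 += 1
    return type1 / sims, type2 / sims

for alpha in (0.01, 0.05, 0.10):
    a_hat, b_hat = sim_error_rates(alpha)
    print(f"alpha={alpha:.2f}  est. Type I ~ {a_hat:.3f}  est. Type II ~ {b_hat:.3f}")
```

Note that each error rate is estimated in its own "universe": Type I only where $H_0$ is true, Type II only where it is false, matching the table above.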
Neither type of error is inherently worse; which one matters more depends on the application.
For example, in [[natural resources management]], Type II errors are sometimes more important to control than Type I errors. Recovery of ecosystems that cross critical thresholds can be very energy intensive, so failing to detect a change in vegetation composition (i.e., failing to reject the null hypothesis) can be worse than mistakenly concluding a change occurred when in fact it had not. For that reason, hypothesis testing in natural resource management is often conducted at a larger test size (a higher $\alpha$) to increase [[statistical power]].
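The effect of raising the test size on power can be computed directly rather than simulated. This sketch (same illustrative one-sided z-test assumptions as above: known $\sigma = 1$, effect size 0.5, $n = 30$) shows power, $1 - \beta$, increasing with $\alpha$.

```python
from statistics import NormalDist

Z = NormalDist()

def power(alpha, effect=0.5, n=30):
    """Power of a one-sided z-test (known sigma = 1) against H1: mu = effect.
    The effect size and n are illustrative assumptions."""
    z_crit = Z.inv_cdf(1 - alpha)   # rejection threshold in z units
    shift = effect * n ** 0.5       # how far H1 shifts the z statistic
    return 1 - Z.cdf(z_crit - shift)  # P(reject H0 | H1 true) = 1 - beta

for a in (0.01, 0.05, 0.10):
    print(f"alpha={a:.2f}  power={power(a):.3f}")
```

Under these assumptions, accepting a larger chance of a false positive buys a meaningfully smaller chance of missing a real change, which is the tradeoff the resource-management example relies on.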