The mean squared error is a property of an [[estimator]] and can be used to compare two estimators to determine the "best" based on bias and variance.
Let $\hat \theta$ be an [[estimator]] of a [[parameter]] $\theta$. The mean squared error (MSE) of $\hat \theta$ is defined as
$MSE(\hat \theta) = E[(\hat \theta - \theta)^2]$
If $\hat \theta$ is an unbiased estimator of $\theta$, its mean squared error is simply the [[variance]] of $\hat \theta$.
The MSE decomposes into the variance plus the squared bias (the bias-variance decomposition):
$MSE(\hat \theta) = Var[\hat \theta] + (B(\hat \theta))^2$
where the variance of $\hat \theta$ is
$Var[\hat \theta] = E[(\hat \theta - E[\hat \theta])^2]$
and the bias of $\hat \theta$ is
$B(\hat \theta) = E[\hat \theta] - \theta$
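The decomposition can be checked numerically. The sketch below (with an assumed true mean and a deliberately biased shrinkage estimator, both chosen for illustration) estimates the MSE, variance, and bias by simulation and confirms that $MSE = Var + B^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 5.0              # true parameter (assumed for this example)
n, trials = 30, 200_000  # sample size and number of simulated samples

# A deliberately biased estimator: shrink the sample mean toward zero
samples = rng.normal(theta, 2.0, size=(trials, n))
hat_theta = 0.9 * samples.mean(axis=1)

mse = np.mean((hat_theta - theta) ** 2)   # E[(hat_theta - theta)^2]
var = np.var(hat_theta)                   # Var[hat_theta]
bias = hat_theta.mean() - theta           # B(hat_theta) = E[hat_theta] - theta

print(mse, var + bias ** 2)  # the two quantities agree
```

The agreement is exact (up to floating point) because the decomposition is an algebraic identity, not just an approximation.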
## relative efficiency
Between two unbiased estimators $\hat \theta_1$ and $\hat \theta_2$, $\hat \theta_1$ is more efficient than $\hat \theta_2$ if $Var[\hat \theta_1] < Var[\hat \theta_2]$. The relative efficiency measures how much more efficient $\hat \theta_1$ is, and is defined as
$Eff(\hat \theta_1, \hat \theta_2) = \frac{Var[\hat \theta_2]}{Var[\hat \theta_1]}$
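As a classic illustration (the normal distribution and sample sizes here are assumptions for the sketch), both the sample mean and the sample median are unbiased estimators of the center of a normal distribution, but the mean has smaller variance. Simulating both gives a relative efficiency near the theoretical large-sample value of $\pi/2 \approx 1.57$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, trials = 0.0, 101, 100_000

# Draw many samples and apply both estimators to each one
data = rng.normal(theta, 1.0, size=(trials, n))
mean_est = data.mean(axis=1)           # hat_theta_1: sample mean
median_est = np.median(data, axis=1)   # hat_theta_2: sample median

# Eff(hat_theta_1, hat_theta_2) = Var[hat_theta_2] / Var[hat_theta_1]
eff = np.var(median_est) / np.var(mean_est)
print(eff)  # roughly pi/2 for normal data with large n
```

A value above 1 means $\hat \theta_1$ (the mean) is the more efficient of the two for this distribution.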
> [!Tip]- Additional Resources
> - https://www.youtube.com/watch?v=XqWfeND04vs