The inverse [[gamma distribution]] is useful for finding the posterior of $\sigma^2$ given [[independent and identically distributed|iid]] data from the [[normal distribution]] with known mean $\mu$. Let $X_1, \dots, X_n \overset{iid}{\sim} N(\mu, \sigma^2)$ with $\mu$ known, and let the [[prior]] distribution on $\sigma^2$ be an inverse gamma distribution.
$\sigma^2 \sim \Gamma^{-1}(a,b) \implies \sigma^2 \mid x \sim \Gamma^{-1} \Big( \frac n2 + a, \frac{SSE}{2} + b \Big )$
When both the mean and variance are unknown, use the procedure in [[estimating the mean and variance of normally distributed data]].
**Prior**
$\pi(\sigma^2) \propto ({\sigma^2})^{-(a+1)} \exp \Big \{-\frac{b}{ \sigma^2} \Big \}$
**Likelihood**
$f(x | \sigma^2, \mu) \propto (\sigma^2)^{-n/2} \exp \Big \{-\frac{1}{2 \sigma^2} \sum (x_i - \mu)^2 \Big \}$
**Posterior**
$\pi(\sigma^2|x, \mu) \propto ({\sigma^2})^{-(n/2 + a+1)} \exp \Big \{-\frac{1}{\sigma^2} \Big(b + \frac{\sum(x_i - \mu)^2}{2} \Big) \Big \}$

This is the kernel of a $\Gamma^{-1}\Big(\frac n2 + a, \ b + \frac{\sum(x_i - \mu)^2}{2}\Big)$ distribution, which gives the update stated above.
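As a sanity check on this kernel-matching step, the sketch below (with made-up hyperparameters $a$, $b$ and simulated data; `scipy.stats.invgamma` with `scale=` matches the shape/scale parameterization used here) evaluates the unnormalized posterior on a grid and confirms it is proportional to the inverse gamma density with the updated parameters.

```python
import numpy as np
from scipy.stats import invgamma

# Hypothetical prior hyperparameters, known mean, and simulated data
a, b, mu = 3.0, 2.0, 5.0
x = np.random.default_rng(0).normal(mu, 1.5, size=20)
n, sse = len(x), np.sum((x - mu) ** 2)

# Unnormalized posterior: prior kernel times likelihood kernel, on a grid of sigma^2 values
grid = np.linspace(0.5, 8.0, 200)
unnorm = grid ** (-(a + 1)) * np.exp(-b / grid) * grid ** (-n / 2) * np.exp(-sse / (2 * grid))

# Inverse gamma density with the updated parameters (scipy's `scale` plays the role of b + SSE/2)
post_pdf = invgamma.pdf(grid, n / 2 + a, scale=b + sse / 2)

# If the derivation is right, the ratio is constant across the grid
ratio = unnorm / post_pdf
print(ratio.max() / ratio.min())  # ~1.0 up to floating point error
```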
A natural point estimate for $\sigma^2$ is the posterior mean, which exists whenever $\frac n2 + a > 1$:
$E(\sigma^2 | x) = \frac{b + \frac{SSE}{2}}{\frac n2 + a - 1} = \frac{SSE + 2b}{n + 2a - 2}$
where $SSE = \sum(x_i - \mu)^2$ is the sum of squared errors about the known mean.
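The posterior mean and an interval estimate are easy to read off the fitted inverse gamma. A minimal sketch, again with made-up hyperparameters and simulated data, and an equal-tailed 95% interval chosen purely for illustration:

```python
import numpy as np
from scipy.stats import invgamma

# Hypothetical prior hyperparameters, known mean, and simulated data
a, b, mu = 3.0, 2.0, 5.0
x = np.random.default_rng(1).normal(mu, 1.5, size=20)
n, sse = len(x), np.sum((x - mu) ** 2)

# Posterior: inverse gamma with shape n/2 + a and scale SSE/2 + b
post = invgamma(n / 2 + a, scale=sse / 2 + b)

# Posterior mean matches the closed form (SSE + 2b) / (n + 2a - 2)
print(post.mean(), (sse + 2 * b) / (n + 2 * a - 2))

# Equal-tailed 95% credible interval for sigma^2
print(post.interval(0.95))
```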