Let $X \sim N(\mu, \sigma^2)$ with unknown $\mu$ and $\sigma^2$. Note that $\mu$ and $\sigma^2$ are *not* independent under this prior: the prior on $\mu$ is specified conditionally on $\sigma^2$.
The prior for $\mu$ follows the [[normal-normal conjugate]] and the prior for $\sigma^2$ follows the [[inverse gamma-normal conjugate]]. The joint prior for $(\mu, \sigma^2)$ is then the product of the conditional normal and the marginal inverse gamma, $p(\mu, \sigma^2) = p(\mu \mid \sigma^2)\, p(\sigma^2)$, by the chain rule of [[joint probability]].
**Priors**
$\mu | \sigma^2 \sim N \Big( \mu_0, \frac{\sigma^2}{ k_0} \Big)$
$\sigma^2 \sim \Gamma^{-1} \Big ( \frac{v_0}{2}, \frac{v_0 \sigma^2_0}{2} \Big) $
> [!NOTE]
> Note that the inverse gamma parameterized in this way is equivalent to the **scaled-inverse** [[chi-squared distribution]]: $\sigma^2 \sim \text{scaled-inv-}\chi^2 (v_0, \sigma^2_0)$.
$(\mu, \sigma^2) \sim N \Big ( \mu \, \Big | \, \mu_0, \frac{\sigma^2}{k_0} \Big ) \cdot \Gamma^{-1} \Big ( \sigma^2 \, \Big | \, \frac{v_0}{2}, \frac{v_0 \sigma^2_0}{2} \Big)$
where $v_0$ is the [[degrees of freedom]] on the prior and $k_0$ can be thought of as the prior sample size.
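As a quick numerical check of the note above, the two parameterizations give identical densities. A minimal sketch, assuming the `dinvgamma()` and `dinvchisq()` densities from the `LaplacesDemon` package (also used later in this note):
```R
library(LaplacesDemon) # for dinvgamma() and dinvchisq()
v0 = 1; sig2_0 = 20    # prior hyperparameters used below
x = c(1, 5, 20, 80)
# Inverse gamma with shape v0/2 and scale v0*sig2_0/2...
dinvgamma(x, v0/2, v0*sig2_0/2)
# ...matches the scaled inverse chi-squared with df v0 and scale sig2_0
dinvchisq(x, v0, sig2_0)
```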
**Posteriors**
$\mu, \sigma^2 \, | \, \mathbf{x} \sim N \Big (\mu \, \Big | \, \mu_n, \frac{\sigma^2}{k_n} \Big) \cdot \Gamma^{-1} \Big ( \sigma^2 \, \Big | \, \frac{v_n}{2}, \frac{v_n \sigma^2_n}{2} \Big )$
where
$k_n = k_0 + n$
$\mu_n = \frac{k_0}{k_n} \mu_0 + \frac{n}{k_n} \bar x $
$v_n = v_0 + n$
$\sigma^2_n = \frac{1}{v_n} \Big (v_0 \sigma^2_0 + (n - 1)S^2 + \frac{k_0 n}{k_n}(\bar x - \mu_0)^2 \Big )$
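For example, with $k_0 = 1$ and $n = 99$, the posterior mean is $\mu_n = \frac{1}{100} \mu_0 + \frac{99}{100} \bar x$: the prior mean counts as a single extra observation, so the data quickly dominate.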
## R
Luckily for us, this is easy to simulate in [[R]] by direct Monte Carlo (composition) sampling; both draws below come from known closed-form distributions, so the full machinery of [[Markov Chain Monte Carlo]] is not needed. For some large number of repetitions,
1. Draw $\sigma^2$ from $\sigma^2 \, | \, \mathbf{x}$
2. Draw $\mu$ from $\mu \, | \, \sigma^2, \mathbf{x}$, where $\sigma^2$ is from step 1.
```R
# Read data
hubble = read.table(
  url("https://raw.githubusercontent.com/bzaharatos/STAT-5630/main/hubble.txt"),
  sep = "\t"
)
n = length(hubble$x)
xbar = mean(hubble$x)
s2 = var(hubble$x)
# Prior parameters
k0 = 1; v0 = 1; sig2_0 = 20; mu0 = 20
# Posterior parameters
kn = k0 + n; vn = v0 + n
sig2_n = (1/vn)*(v0*sig2_0 + (n-1)*s2 + k0*n/kn*(xbar-mu0)^2)
mu_n = k0/kn*mu0 + n/kn*xbar
# Simulate
m = 10000
post = matrix(NA, nrow=m, ncol=2)
for (i in 1:m){
  # Draw sigma^2 from its marginal posterior, Inv-Gamma(vn/2, vn*sig2_n/2);
  # equivalently: LaplacesDemon::rinvchisq(1, df = vn, scale = sig2_n)
  post[i,2] = 1/rgamma(1, vn/2, vn*sig2_n/2)
  # Draw mu | sigma^2, x ~ N(mu_n, sigma^2/kn), with sigma^2 from the line above
  post[i,1] = rnorm(1, mu_n, sqrt(post[i,2]/kn))
}
# Point estimates: posterior means of mu and sigma^2
colMeans(post)
```
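Because the columns of `post` are draws from the joint posterior, other summaries come essentially for free. For example, 95% equal-tailed credible intervals, using only the objects defined above:
```R
# 2.5% and 97.5% posterior quantiles for mu (col 1) and sigma^2 (col 2)
apply(post, 2, quantile, probs = c(0.025, 0.975))
```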
Note that we can also evaluate these posteriors analytically on a grid, rather than numerically through simulation, because their forms are known.
```R
library(LaplacesDemon) # for dinvgamma
# Create parameter grids (ranges are illustrative; widen if mass is cut off)
mu_grid = seq(-25, 50, length.out = 1000)
sig2_grid = seq(0.01, 500, length.out = 1000)
prior_mu = dnorm(mu_grid, mu0, sqrt(sig2_0/k0)) # prior mu | sigma^2 at sigma^2 = sig2_0
# Posterior of sigma^2 | x is Inv-Gamma(vn/2, vn*sig2_n/2); for mu we evaluate
# the conditional posterior mu | sigma^2, x at the plug-in sigma^2 = sig2_n
posterior_sig2 = dinvgamma(sig2_grid, vn/2, vn*sig2_n/2)
posterior_mu = dnorm(mu_grid, mu_n, sqrt(sig2_n/kn))
# Get MAP estimators
map_mu = mu_grid[which.max(posterior_mu)]
map_sig2 = sig2_grid[which.max(posterior_sig2)]
```
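As a sanity check on the grid resolution, the grid MAPs can be compared against the closed-form modes: the normal density peaks at its mean $\mu_n$, and an inverse gamma with shape $\frac{v_n}{2}$ and scale $\frac{v_n \sigma^2_n}{2}$ has mode $\frac{v_n \sigma^2_n}{v_n + 2}$. A minimal check using the objects above:
```R
# Closed-form modes of the plug-in densities evaluated on the grids
mode_mu = mu_n                 # a normal density peaks at its mean
mode_sig2 = vn*sig2_n/(vn + 2) # Inv-Gamma mode: scale/(shape + 1)
# Differences should be on the order of the grid spacing
c(map_mu - mode_mu, map_sig2 - mode_sig2)
```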