For data $X_1, \dots, X_n \overset{iid}{\sim} Binomial(1,p)$ from the [[Binomial distribution]], if the [[prior]] distribution on $p$ is a [[beta distribution]], then the [[posterior]] is also a beta distribution.
$
p \sim Beta(\alpha, \ \beta) \ \to \ p \mid x \sim Beta(\alpha + n \bar x, \ \beta + n - n \bar x)
$
Notice we simply add the number of successes to $\alpha$ and the number of failures to $\beta$.
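As a minimal sketch of this update in Python (the `update_beta` helper and the toy data are illustrative, not from any particular library):

```python
import numpy as np

def update_beta(alpha, beta, x):
    """Conjugate update: add successes to alpha, failures to beta."""
    successes = int(np.sum(x))      # n * x_bar
    failures = len(x) - successes   # n - n * x_bar
    return alpha + successes, beta + failures

# e.g. 7 successes in 10 trials with a Beta(2, 2) prior
x = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 1])
print(update_beta(2, 2, x))  # -> (9, 5)
```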
**Prior distribution**
$\pi(p) \propto p^{\alpha -1}(1-p)^{\beta - 1}$
**Likelihood**
$f(x | p) \propto p^{n \bar x} (1-p)^{n - n \bar x}$
**Posterior**
Multiplying the prior by the likelihood gives
$
\pi(p \mid x) \propto \pi(p) \, f(x \mid p) \propto p^{\alpha + n \bar x - 1}(1-p)^{\beta + n - n \bar x - 1}
$
which is the kernel of a $Beta(\alpha + n \bar x, \ \beta + n - n \bar x)$ distribution.
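A quick numeric sanity check of the conjugacy (the prior parameters and data summary here are made up): normalizing the prior-times-likelihood kernel by the beta function recovers the exact posterior density from `scipy.stats.beta`.

```python
import numpy as np
from scipy import stats
from scipy.special import beta as beta_fn

alpha, beta_, n, nxbar = 2.0, 2.0, 10, 7.0   # made-up prior and data summary
a_post, b_post = alpha + nxbar, beta_ + n - nxbar

p = np.linspace(0.01, 0.99, 99)
kernel = p**(a_post - 1) * (1 - p)**(b_post - 1)   # prior kernel times likelihood
posterior = stats.beta.pdf(p, a_post, b_post)

# dividing the kernel by B(a_post, b_post) gives the normalized posterior
print(np.allclose(kernel / beta_fn(a_post, b_post), posterior))  # True
```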
A point estimator for $p$ could be the posterior mean
$E(p | x) = \frac{n \bar x + \alpha}{n + \alpha + \beta}$
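Continuing with the same made-up numbers, the formula agrees with the mean `scipy.stats.beta` reports for the updated parameters:

```python
from scipy import stats

alpha, beta_, n, nxbar = 2.0, 2.0, 10, 7.0
print((nxbar + alpha) / (n + alpha + beta_))              # 0.642857...
print(stats.beta.mean(alpha + nxbar, beta_ + n - nxbar))  # same value
```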
See the [[Jeffreys' prior for the binomial distribution]].
Note that $\bar x$ is the observed proportion of successes and the [[maximum likelihood estimator|MLE]] for $p$, so $\hat p = \bar x$. You can think of $\alpha$ and $\beta$ as encoding a prior guess for the proportion, namely the prior mean
$\hat p_0 = E(p) = \frac{\alpha}{\alpha + \beta}$
As $n$ increases, the influence of $\alpha$ and $\beta$ diminishes. In fact, we can write the posterior mean as a weighted average of the estimate of $p$ before seeing the data, $\hat p_0$, and the one after, $\hat p$.
$E(p | x) = \frac{n \bar x + \alpha}{n + \alpha + \beta} = (1 - w_n)\hat p_0 + w_n \hat p$
where
$w_n = \frac{n}{n + \alpha + \beta}$
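A quick check of this decomposition with the same hypothetical numbers as above:

```python
n, nxbar, alpha, beta_ = 10, 7.0, 2.0, 2.0
p_hat = nxbar / n                    # MLE, the sample proportion x_bar
p_0 = alpha / (alpha + beta_)        # prior mean
w_n = n / (n + alpha + beta_)        # weight on the data, -> 1 as n grows

print((1 - w_n) * p_0 + w_n * p_hat)          # 0.642857...
print((nxbar + alpha) / (n + alpha + beta_))  # same value
```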
Setting $\alpha = \beta = 0$ in the posterior mean gives $\frac{n \bar x}{n} = \bar x$, so in the limit $\alpha = \beta \to 0$ the Bayesian estimator and the MLE coincide; however, a proper beta prior requires $\alpha$ and $\beta$ to be strictly greater than $0$.
## tuning the beta distribution
Tuning the beta distribution is common when specifying a prior. Use the formula for the mean of the beta distribution to set the ratio of the shape parameters $\alpha$ and $\beta$. For example, for a prior centered at $0.45$
$E(p) = \frac{\alpha}{\alpha+\beta} = 0.45$
With some rearranging we get
$\alpha = \frac{0.45}{0.55} \beta = \frac{9}{11}\beta$
Any beta distribution with that $\alpha : \beta$ ratio has the right mean; larger values of $\alpha$ and $\beta$ give a lower variance, i.e. a more concentrated prior, as the sketch below shows.
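A sketch of this tuning, treating $\alpha + \beta$ as a freely chosen "prior sample size" while holding the mean at $0.45$ (the loop values are arbitrary):

```python
from scipy import stats

target = 0.45
for k in (2, 20, 200):                    # k = alpha + beta
    a, b = target * k, (1 - target) * k   # keeps a / (a + b) = 0.45
    print(k, stats.beta.mean(a, b), stats.beta.var(a, b))
# the mean stays at 0.45 while the variance shrinks as k grows
```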