Normalizing Constant
Larry discussed the normalizing constant paradox on his blog.
Suppose that $f(\theta)=g(\theta)/c$, where $g(\theta)$ is known but we cannot compute the integral $c=\int g(\theta)d\theta$. Given a sample $\theta_1,\ldots,\theta_n\sim f$, our interest is to estimate $c$.
It is straightforward to obtain a frequentist estimator,
\[\hat c = \frac{g(\theta_0)}{\hat f(\theta_0)}\,,\]where $\theta_0$ is any fixed point and $\hat f$ is a density estimate (for example, a kernel density estimate) built from the sample. A Bayesian analysis that places a prior $h(c)$ on $c$ seems weird, because the posterior
\[h(c\mid \theta_1,\ldots,\theta_n) \propto h(c) \prod_{i=1}^n\frac{g(\theta_i)}{c}\propto h(c)c^{-n}\]is useless: it depends on the data only through the sample size $n$.
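To make the frequentist recipe concrete, here is a minimal sketch in Python. Everything specific in it is an assumption for illustration: the toy unnormalized density $g(\theta)=e^{-\theta^2/2}$ (so the true constant $c=\sqrt{2\pi}$ is known and the estimate can be checked), the evaluation point $\theta_0=0$, and a Gaussian kernel density estimate for $\hat f$.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy unnormalized density (assumed for illustration):
# g(theta) = exp(-theta^2 / 2), so f is the standard normal density
# and the true normalizing constant is c = sqrt(2*pi) ~ 2.5066.
def g(theta):
    return np.exp(-theta**2 / 2)

rng = np.random.default_rng(0)
n = 10_000
theta = rng.standard_normal(n)   # sample theta_1, ..., theta_n ~ f

# Frequentist estimator: c_hat = g(theta_0) / f_hat(theta_0),
# with f_hat a kernel density estimate and theta_0 an arbitrary fixed point.
f_hat = gaussian_kde(theta)
theta_0 = 0.0
c_hat = g(theta_0) / f_hat(theta_0)[0]

print(f"c_hat = {c_hat:.4f}, true c = {np.sqrt(2 * np.pi):.4f}")

# By contrast, the "posterior" h(c) * c^{-n} is the same function of c
# for every sample of size n: it never uses the observed theta's.
```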
This treatment acted as if we had a family of densities $f(\theta\mid c)$ indexed by $c$. But we don’t: $f(\theta)=g(\theta)/c$ is a valid density only for one value of $c$, namely, $c=\int g(\theta)d\theta$.
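To spell this out, write $c_0=\int g(\theta)d\theta$ for the true constant. For any candidate value $c>0$,
\[\int \frac{g(\theta)}{c}\,d\theta = \frac{c_0}{c}\,,\]which equals $1$ only when $c=c_0$. So $\{g(\theta)/c : c>0\}$ is not a parametric family of densities, and $\prod_{i=1}^n g(\theta_i)/c$ is not a genuine likelihood for $c$.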
Larry asked and concluded at the end of the post:
What is a valid Bayes estimator of $c$? Pretending I don’t know $g$ or simply declaring it to be a non-statistical problem seem like giving up.
I really think there should be a good Bayesian estimator here but I don’t know what it is.
In addition to the post itself, there are many insightful comments.
It is also worth checking out the discussions on Stack Exchange (Bayesians: slaves of the likelihood function?) and on Xi'an's Og (estimating a constant).