Basic Principles of Monte Carlo
One Lemma
Suppose $U\sim Uniform[0, 1]$ and $F$ is a one-dimensional cumulative distribution function (cdf). Then $X=F^{-1}(U)$ has distribution $F$, where we define the generalized inverse $F^{-1}(u)=\inf\{x;F(x)\ge u\}$.
This lemma can be proved briefly as follows.
\[\begin{array}{ll} p(X < x) &= p(F^{-1}(U)< x)\\ & = 1-p(F^{-1}(U)\ge x)\\ & = 1-p(U\ge F(x))\\ & = p(U<F(x))\\ & = F(x) \end{array}\]This lemma gives an explicit way to generate a one-dimensional random variable when its cdf is available. However, many distributions do not have a closed-form cdf, so it is often difficult to apply the inversion procedure above directly.
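As a quick illustration (the exponential distribution is my example, not from the text): its cdf $F(x)=1-e^{-\lambda x}$ inverts in closed form, $F^{-1}(u)=-\log(1-u)/\lambda$, so the lemma applies directly.

```python
import math
import random

def sample_exponential(rate, n, seed=0):
    """Inverse-transform sampling for Exp(rate):
    F(x) = 1 - exp(-rate * x), so F^{-1}(u) = -log(1 - u) / rate."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

samples = sample_exponential(rate=2.0, n=100_000)
# The sample mean should be close to 1/rate = 0.5.
print(sum(samples) / len(samples))
```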
Some distributions with nice mathematical properties can be drawn from using special techniques. For mathematically less convenient distributions, von Neumann (1951) proposed a very general algorithm, the rejection method, which can draw from any probability distribution whose density is computable up to a normalizing constant, regardless of dimension.
The Rejection Method
Suppose $l(\mathbf x)=c\pi(\mathbf x)$ is computable, where $\pi$ is a probability distribution or density function and $c$ is unknown. If we can find a sampling distribution $g(\mathbf x)$ and a “covering constant” $M$ so that the envelope property, $Mg(\mathbf x)\ge l(\mathbf x)$, is satisfied for all $\mathbf x$, then we can apply the following rejection procedure.
- Draw a sample $\mathbf x$ from $g(\cdot)$ and compute the ratio \(r = \frac{l(\mathbf x)}{Mg(\mathbf x)}\,(\le 1)\).
- Flip a coin with success probability $r$; if it lands heads, accept and return $\mathbf x$; otherwise, reject $\mathbf x$ and go back to the first step.
Finally, the accepted sample follows the target distribution $\pi$.
Let $I$ be the indicator function so that $I=1$ if the sample $\mathbf X$ drawn from $g(\cdot)$ is accepted, and $I = 0$ otherwise. Then, we have
\[P(I=1)=\int P(I=1\mid \mathbf X=\mathbf x)g(\mathbf x)d\mathbf x=\int \frac{c\pi(\mathbf x)}{Mg(\mathbf x)}g(\mathbf x)d\mathbf x=\frac{c}{M}\]Hence, by Bayes' theorem, \[p(\mathbf x\mid I = 1)=\frac{P(I=1\mid \mathbf X=\mathbf x)g(\mathbf x)}{P(I=1)}=\frac{c\pi(\mathbf x)/M}{c/M}=\pi(\mathbf x)\]
Since each draw is accepted with probability $c/M$, the expected number of draws needed to obtain one accepted sample is $M/c$, so the key to a successful application of the algorithm is to find a good trial distribution $g(\mathbf x)$ which gives rise to a small $M$.
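The procedure and the acceptance rate $c/M$ can be checked numerically. The following sketch uses an example of my own choosing, not one from the text: the unnormalized density $l(x)=x(1-x)$ on $[0,1]$, whose normalized version is the Beta(2,2) density $6x(1-x)$, so $c=1/6$. With a uniform trial $g$ and the smallest valid envelope constant $M=1/4$, the acceptance rate should be $c/M=2/3$.

```python
import random

rng = random.Random(0)

def l(x):
    # Unnormalized target: l(x) = c * pi(x) with pi = Beta(2,2),
    # i.e. pi(x) = 6 x (1 - x), so c = 1/6.
    return x * (1.0 - x)

M = 0.25  # smallest M with M * g(x) >= l(x) for g = Uniform[0,1], since max l = 1/4

def rejection_sample():
    trials = 0
    while True:
        trials += 1
        x = rng.random()        # draw from the trial distribution g
        r = l(x) / M            # ratio l(x) / (M g(x)), with g(x) = 1
        if rng.random() < r:    # accept with probability r
            return x, trials

n = 50_000
draws = [rejection_sample() for _ in range(n)]
samples = [x for x, _ in draws]
mean_trials = sum(t for _, t in draws) / n
# Expected trials per accepted sample: M/c = (1/4)/(1/6) = 1.5,
# and the Beta(2,2) mean is 0.5.
print(sum(samples) / n, mean_trials)
```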
Let us take the Truncated Gaussian distribution as an example. Suppose we want to draw random samples from $\pi(x)\propto \phi(x)I(x>c)$, where $\phi(x)$ is the standard normal density and $I$ is the indicator function.
- If $c < 0$, we can repeatedly generate samples from a standard Gaussian distribution until a sample satisfying $X>c$ is obtained. In the worst case ($c$ close to $0$), the efficiency of this method is about $50\%$.
- If $c > 0$, especially when $c$ is large, the above strategy is very inefficient. We can use the rejection method with an exponential distribution as the envelope function. Suppose the density of this exponential distribution has the form $\lambda_0 \exp(-\lambda_0 x)$. We want to find the smallest constant $b$ such that the envelope property $b\lambda_0 \exp(-\lambda_0 x)\ge \phi(x)$ holds for all $x>c$.
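A sketch of this tail sampler, under assumptions of my own: I use the common variant in which the exponential trial is shifted to start at $c$, with density $\lambda_0\exp(-\lambda_0(x-c))$ for $x>c$, and I take $\lambda_0=c$. With that choice the smallest envelope constant is $b=\phi(c)/\lambda_0$, and the acceptance ratio simplifies to $r=\exp(-(x-c)^2/2)\le 1$.

```python
import math
import random

rng = random.Random(0)

def truncated_normal_tail(c, n):
    """Sample X ~ N(0,1) conditioned on X > c (for c > 0) by rejection,
    using a shifted exponential trial with rate lambda0 = c.
    For this trial the acceptance ratio is r = exp(-(x - c)^2 / 2)."""
    out = []
    while len(out) < n:
        x = c + rng.expovariate(c)  # draw from the shifted exponential on (c, inf)
        if rng.random() < math.exp(-0.5 * (x - c) ** 2):
            out.append(x)
    return out

c = 2.0
samples = truncated_normal_tail(c, 20_000)
# Every sample lies in the tail x > c; the sample mean should be close to
# phi(c) / (1 - Phi(c)), roughly 2.37 for c = 2.
print(min(samples), sum(samples) / len(samples))
```

Unlike the naive resampling strategy, whose acceptance rate $1-\Phi(c)$ collapses for large $c$ (about $2.3\%$ at $c=2$), this envelope stays efficient as $c$ grows.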
Variance Reduction Methods
Stratified Sampling
Control Variates Method
Antithetic Variates Method
Rao-Blackwellization
References
Liu, Jun S. Monte Carlo strategies in scientific computing. Springer Science & Business Media, 2008.