WeiYa's Work Yard

A traveler with endless curiosity, who fell into the ocean of statistics, tries to write down his ideas and notes to save himself.

The Applications of Monte Carlo

Tags: Importance Sampling, Monte Carlo Integration, Optimization

Monte Carlo Integration

In order to calculate the following integral

\[I = \int _D g(\mathbf x)d\mathbf x\]

we often adopt the following Monte Carlo technique:

  1. Firstly, compute the volume $V$ of the region $D$: \(V = \int_D d\mathbf x\)
  2. Secondly, sample $\mathbf x^{(1)}, \mathbf x^{(2)},\ldots, \mathbf x^{(m)}$ uniformly from the region $D$
  3. Finally, approximate the integral by \(\hat I_m = V\frac{1}{m}\sum\limits_{i=1}^mg(\mathbf x^{(i)})\)
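
For concreteness, here is a minimal Python sketch of the three steps. The unit-disk region, the integrand $g(\mathbf x) = x_1^2 + x_2^2$, and the sample size are all illustrative choices of mine (not from the text); the uniform draws are produced by rejection from the bounding square.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # integrand: g(x) = x_1^2 + x_2^2
    return x[:, 0] ** 2 + x[:, 1] ** 2

# Step 1: the volume of the unit disk D = {x : ||x|| <= 1} is known in closed form
V = np.pi

# Step 2: draw uniform samples from D by rejection from the bounding square
m = 100_000
batches = []
n_accepted = 0
while n_accepted < m:
    cand = rng.uniform(-1.0, 1.0, size=(m, 2))
    keep = cand[(cand ** 2).sum(axis=1) <= 1.0]
    batches.append(keep)
    n_accepted += len(keep)
x = np.concatenate(batches)[:m]

# Step 3: rescale the sample mean of g by the volume
I_hat = V * g(x).mean()
print(I_hat)  # the exact value is pi / 2 ~ 1.5708
```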

Actually, we can view this as a special case of importance sampling in which the importance distribution is the uniform distribution because

\[I = \int_D g(\mathbf x)d\mathbf x = V\int_D \frac 1V g(\mathbf x)d\mathbf x\,.\]

According to the law of large numbers, we have

\[\lim_{m\rightarrow \infty}\hat I_m = I\]

and from the central limit theorem,

\[\frac{1}{V}\sqrt{m}(\hat I_m-I)\rightarrow N(0, \sigma^2)\]

where $\sigma^2 = \mathrm{Var}(g(\mathbf x))$, with $\mathbf x$ uniformly distributed on $D$.
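
A practical consequence of the CLT is an error estimate for free: replacing $\sigma$ by the sample standard deviation of $g$ over the draws gives an approximate standard error $V\hat\sigma/\sqrt m$ for $\hat I_m$. A minimal sketch, using an arbitrary one-dimensional example of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate I = \int_0^2 exp(-x^2) dx together with a CLT-based 95% interval.
m = 100_000
V = 2.0                                      # "volume" (length) of D = [0, 2]
g_vals = np.exp(-rng.uniform(0.0, 2.0, size=m) ** 2)
I_hat = V * g_vals.mean()
se = V * g_vals.std(ddof=1) / np.sqrt(m)     # plug-in estimate of V * sigma / sqrt(m)
print(I_hat - 1.96 * se, I_hat + 1.96 * se)  # should usually bracket the true ~0.8821
```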

Importance Sampling

It is worth noting that the above sampling is uniform. However, it is often difficult to produce uniform random samples from an arbitrary region $D$. To overcome this problem, we can adopt importance sampling, in which one generates random samples $\mathbf x^{(1)}, \mathbf x^{(2)},\ldots, \mathbf x^{(m)}$ from a nonuniform distribution $\pi(\mathbf x)$ that puts more probability mass on the important parts of the state space $D$. Then one can estimate the integral $I$ as

\[\hat I_m = \frac{1}{m}\sum\limits_{j=1}^m\frac{g(\mathbf x^{(j)})}{\pi(\mathbf x^{(j)})}\]

Note that no extra volume factor appears here: taking $\pi(\mathbf x) \equiv 1/V$ recovers the uniform estimator above, so $V$ is already absorbed into the weights $1/\pi$.
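
As a hypothetical one-dimensional illustration (again my own choice, not from the text), take $g(x) = x^2e^{-x}$ on $D = (0, \infty)$ and let $\pi$ be the $\mathrm{Exp}(1)$ density; the weights $g/\pi$ reduce to $x^2$ and the exact answer is $\Gamma(3) = 2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate I = \int_0^infty x^2 e^{-x} dx = Gamma(3) = 2, using the
# Exp(1) density pi(x) = e^{-x} as the importance distribution.
m = 100_000
x = rng.exponential(scale=1.0, size=m)  # draws from pi
weights = x ** 2                        # g(x) / pi(x) = x^2 e^{-x} / e^{-x} = x^2
I_hat = weights.mean()
print(I_hat)                            # close to the exact value 2
```

Here the exponential factor cancels exactly; in general, $\pi$ should be easy to sample from and heavy-tailed enough relative to $g$ that the weights do not blow up.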

Optimization Problem

Suppose we need to find the minimum of a target function $h(\mathbf x)$. The problem is equivalent to finding the maximum of another function, $q_T(\mathbf x)=\exp(-h(\mathbf x)/T)$ (as long as $T>0$).

When $q_T(\mathbf x)$ is integrable for all $T>0$, we can construct a family of probability distributions:

\[\pi_T(\mathbf x)\propto \exp(-h(\mathbf x)/T),\; T>0\]

If we can sample from $\pi_T(\mathbf x)$ for a sufficiently small $T$, the resulting random draws will most likely be located in the vicinity of the global minimum of $h(\mathbf x)$. This consideration is the basis of the well-known simulated annealing algorithm and is also key to the tempering techniques for designing more efficient Monte Carlo algorithms.
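
As a sketch of the idea (a toy example of my own, not a prescription from the reference): run a random-walk Metropolis chain targeting $\pi_T$ while gradually lowering $T$, so the draws tend to freeze near the global minimum. The double-well $h$, the proposal scale, and the geometric cooling schedule below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    # double well: global minimum near x = -1, local minimum near x = +1
    return (x ** 2 - 1.0) ** 2 + 0.3 * x

x, T = rng.uniform(-2.0, 2.0), 1.0
for step in range(20_000):
    y = x + rng.normal(scale=0.5)                 # random-walk proposal
    if np.log(rng.uniform()) < (h(x) - h(y)) / T:
        x = y                                     # Metropolis accept/reject for pi_T
    T = max(1e-3, 0.999 * T)                      # geometric cooling, floored at 1e-3
print(x, h(x))                                    # typically ends near the global minimum
```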

References

Liu, Jun S. Monte Carlo strategies in scientific computing. Springer Science & Business Media, 2008.

