
Adaptive Importance Sampling

Tags: Importance Sampling

One Simple Way

  1. start with a trial density, say $g_0(x) = t_\alpha (x; \mu_0,\Sigma_0)$
  2. using the weighted Monte Carlo samples, estimate the parameters (mean and covariance matrix) and construct a new trial density, say $g_1(x) = t_\alpha (x; \mu_1,\Sigma_1)$
  3. repeat step 2, updating the trial density each time, until a certain measure of discrepancy between the trial distribution and the target distribution, such as the coefficient of variation of the importance weights, does not improve any more (one update step is sketched after this list).
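
Below is a minimal sketch of one update step of this scheme, assuming the `mvtnorm` package and a hypothetical function `log_pi` that returns the log of the unnormalized target density for each row of a matrix of points:

```r
library(mvtnorm)

## One adaptation step of the simple scheme: draw from the current t proposal,
## compute importance weights w = pi(x) / g(x), re-estimate the mean and
## covariance from the weighted sample, and report the coefficient of
## variation of the weights.  `log_pi` is assumed to return log pi(x)
## (up to a constant) for each row of the matrix x.
ais_step <- function(log_pi, mu, Sigma, df = 3, n = 5000) {
  x <- rmvt(n, sigma = Sigma, df = df, delta = mu)
  logw <- log_pi(x) - dmvt(x, delta = mu, sigma = Sigma, df = df, log = TRUE)
  w <- exp(logw - max(logw))              # stabilize before normalizing
  w <- w / sum(w)
  est <- cov.wt(x, wt = w)                # weighted mean and covariance
  list(mu = est$center, Sigma = est$cov,
       cv = sd(w) / mean(w),              # coefficient of variation of the weights
       x = x, w = w)
}
```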

Parametric form

Assume a parametric form for the new trial density $g_1(\mathbf x)$, say $g(\mathbf x; \lambda)$. Then find the optimal choice of $\lambda$ by minimizing the coefficient of variation of the importance weights based on the current sample.

For $\lambda = (\epsilon, \mu, \Sigma)$, \(g(\mathbf x; \lambda) = \epsilon g_0(\mathbf x) + (1-\epsilon)t_{\nu} (\mathbf x; \mu, \Sigma)\)
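
As one concrete reading, the coefficient of variation for a candidate $\lambda$ can be evaluated on a fixed sample from the current proposal $g_0$. The sketch below assumes the `mvtnorm` package, and that the sample `x`, the unnormalized target values `pi_x`, and the proposal log-densities `log_g0` have already been computed (all hypothetical names):

```r
library(mvtnorm)

## Coefficient of variation of the importance weights pi(x) / g(x; lambda)
## for a candidate lambda = (eps, mu, Sigma), evaluated on a fixed sample x.
## `pi_x` (unnormalized target values at x) and `log_g0` (log g0(x)) are assumed given.
cv_lambda <- function(eps, mu, Sigma, x, pi_x, log_g0, nu = 3) {
  g <- eps * exp(log_g0) +
       (1 - eps) * exp(dmvt(x, delta = mu, sigma = Sigma, df = nu, log = TRUE))
  w <- pi_x / g
  sd(w) / mean(w)
}
```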

One Example

Implement an Adaptive Importance Sampling algorithm to evaluate the mean and variance of the density

\[\pi(\mathbf{x})\propto N(\mathbf{x}; \mathbf{0}, 2I_4) + 2N(\mathbf{x}; 3\mathbf{e}, I_4) + 1.5 N(\mathbf{x}; -3\mathbf{e}, D_4)\]

where $\mathbf{e} = (1,1,1,1)$, $I_4 = \mathrm{diag}(1,1,1,1)$, and $D_4 = \mathrm{diag}(2,1,1,0.5)$.
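
For the computations that follow, the unnormalized target can be coded directly; this is a sketch assuming `dmvnorm` from the `mvtnorm` package (the name `log_pi` is my own):

```r
library(mvtnorm)

e  <- rep(1, 4)
I4 <- diag(4)
D4 <- diag(c(2, 1, 1, 0.5))

## Unnormalized target density pi(x) on the log scale, vectorized over rows of x
log_pi <- function(x) {
  log(      dmvnorm(x, mean = rep(0, 4), sigma = 2 * I4) +
      2   * dmvnorm(x, mean =  3 * e,    sigma = I4) +
      1.5 * dmvnorm(x, mean = -3 * e,    sigma = D4))
}
```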

First, adopt the simple way and implement the procedure in R.
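
One possible way to run the simple scheme on this target, reusing the `ais_step` and `log_pi` sketches above; the starting values, degrees of freedom, sample size, and stopping tolerance are arbitrary illustrative choices:

```r
## Iterate the simple scheme until the coefficient of variation of the
## importance weights stops improving.
set.seed(1)
mu <- rep(0, 4); Sigma <- diag(4); cv_old <- Inf
repeat {
  step <- ais_step(log_pi, mu, Sigma, df = 3, n = 1e4)
  if (cv_old - step$cv < 1e-3) break      # no further improvement
  mu <- step$mu; Sigma <- step$Sigma; cv_old <- step$cv
}

## Importance-sampling estimates of the mean and covariance of pi
post_mean <- step$mu
post_cov  <- step$Sigma
```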

Then consider the parametric form, for which there are two steps:

  1. start with a trial density $g_0 =t_{\nu}(0, \Sigma)$
  2. recursively, build \(g_k(\mathbf{x})=(1-\epsilon)g_{k-1}(\mathbf {x}) + \epsilon t_{\nu}(\mathbf x; \mu, \Sigma)\), in which one chooses $(\epsilon, \mu, \Sigma)$ to minimize the coefficient of variation of the importance weights (a sketch of one recursion is given after this list).
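
A sketch of one recursion of this construction, reusing `log_pi` from above. To keep the optimization low-dimensional, the sketch restricts the new component's $\Sigma$ to be diagonal and parameterizes $\epsilon$ on the logit scale; both are simplifications assumed here, not part of the scheme itself:

```r
library(mvtnorm)

nu <- 3; d <- 4
Sigma0 <- diag(d)
set.seed(1)
x <- rmvt(5000, sigma = Sigma0, df = nu)                       # sample from g0
log_g0 <- dmvt(x, delta = rep(0, d), sigma = Sigma0, df = nu, log = TRUE)
pi_x <- exp(log_pi(x))                                         # unnormalized pi(x)

## Coefficient of variation of the weights pi(x) / g1(x; lambda) on the fixed
## sample, with lambda packed as theta = (logit eps, mu, log diag(Sigma))
cv_weights <- function(theta) {
  eps <- plogis(theta[1])
  mu  <- theta[2:(d + 1)]
  Sig <- diag(exp(theta[(d + 2):(2 * d + 1)]))
  g1  <- (1 - eps) * exp(log_g0) +
         eps * exp(dmvt(x, delta = mu, sigma = Sig, df = nu, log = TRUE))
  w <- pi_x / g1
  sd(w) / mean(w)
}

fit <- optim(c(0, rep(0, d), rep(0, d)), cv_weights,
             method = "Nelder-Mead", control = list(maxit = 5000))
eps_hat   <- plogis(fit$par[1])
mu_hat    <- fit$par[2:(d + 1)]
Sigma_hat <- diag(exp(fit$par[(d + 2):(2 * d + 1)]))
```

Later recursions would repeat the same construction, with the fitted mixture playing the role of $g_{k-1}$.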

The R code is as follows

TODO

