
Evaluate Variational Inference

Tags: Variational Inference

A brief summary of two ideas for evaluating variational inference: a diagnostic based on Pareto-smoothed importance sampling (PSIS) and variational simulation-based calibration (VSBC).

In variational inference, we try to find the member $q^*(\theta)$ of some tractable family of distributions $\cQ$ (commonly the family of multivariate Gaussian distributions with diagonal covariance matrices) that minimizes the Kullback-Leibler divergence,

\[q^*(\theta) = \argmin_{q\in\cQ} \KL[q(\cdot)\Vert p(\cdot\mid y)]\,.\]

Automatic Differentiation Variational Inference (ADVI) can find $q^*(\theta)$ by a fairly sophisticated stochastic optimization method.
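
To make this concrete, below is a minimal sketch of ADVI-style stochastic optimization for a toy one-dimensional model. The log joint `grad_log_p` and the Gaussian target are hypothetical stand-ins; real ADVI differentiates the log joint automatically and uses adaptive step sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy target: unnormalized log posterior of N(2, 1).
def grad_log_p(theta):
    return -(theta - 2.0)

mu, omega = 0.0, 0.0   # q = N(mu, exp(omega)^2), mean-field Gaussian
lr = 0.02
for t in range(5000):
    eps = rng.standard_normal()
    sigma = np.exp(omega)
    theta = mu + sigma * eps          # reparameterization trick
    g = grad_log_p(theta)
    # Single-sample Monte Carlo gradients of the ELBO; the entropy
    # term contributes the analytic "+1" in the omega gradient.
    mu += lr * g
    omega += lr * (g * sigma * eps + 1.0)

print(mu, np.exp(omega))  # should be close to the target's mean 2, sd 1
```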

But how can we check whether the approximate posterior $q^*(\theta)$ is a good approximation to the true posterior $p(\theta\mid y)$? The post introduces two ideas.

Based on PSIS

If $q^*(\theta)$ is a good approximation to the true posterior, it can be used as an importance sampling proposal for computing expectations with respect to $p(\theta\mid y)$.
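
As a sketch of that use, the snippet below forms self-normalized importance sampling estimates from draws from $q$; `log_p` (the unnormalized log posterior), `log_q`, and the sampler `q_rvs` are hypothetical stand-ins.

```python
import numpy as np

def snis_expectation(f, q_rvs, log_p, log_q, n=10_000):
    theta = q_rvs(n)                      # draws from the proposal q
    log_w = log_p(theta) - log_q(theta)   # log importance ratios
    log_w -= log_w.max()                  # stabilize before exponentiating
    w = np.exp(log_w)
    return np.sum(w * f(theta)) / np.sum(w)   # estimate of E_p[f(theta)]
```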

The intuition behind Pareto-Smoothed Importance Sampling (PSIS) is to replace the largest, noisiest sample importance weights with model-based estimates obtained by fitting a generalized Pareto distribution to the weight tail. This reduces the variance of the resulting self-normalized importance sampling estimator while keeping the bias small compared to other options such as weight truncation. The estimated shape parameter $\hat k$ of the fitted generalized Pareto distribution also serves as a diagnostic: large values (above about 0.7) indicate that $q^*(\theta)$ is too far from $p(\theta\mid y)$ to be a reliable proposal.
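
The following sketch extracts the $\hat k$ diagnostic by fitting scipy's generalized Pareto distribution to the largest importance weights. Note this is only an illustration: the actual PSIS procedure uses a particular empirical-Bayes fit and also replaces the tail weights with GPD quantiles, which this sketch omits.

```python
import numpy as np
from scipy.stats import genpareto

def psis_khat(w, tail_frac=0.2):
    """Shape parameter k-hat of a GPD fitted to the weight tail."""
    w = np.sort(np.asarray(w))
    m = max(int(tail_frac * len(w)), 5)   # number of tail weights
    threshold = w[-m - 1]
    exceedances = w[-m:] - threshold
    khat, _, _ = genpareto.fit(exceedances, floc=0.0)
    return khat

# Rule of thumb from the PSIS literature: k-hat <= 0.5 is reliable,
# 0.5 < k-hat <= 0.7 is usable, k-hat > 0.7 signals a poor proposal q.
```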

Based on VSBC

In practice, variational inference is often poor at approximating the full posterior; mean-field approximations in particular tend to underestimate the posterior variance. On the other hand, the centre of the variational posterior is much more often a good approximation to the centre of the true posterior.

Variational Simulation-Based Calibration (VSBC) assesses the average performance of the implied variational approximation to univariate posterior marginals, and it indicates whether the centre of the variational posterior will be, on average, biased.
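
A minimal sketch of the VSBC loop is below, assuming a conjugate normal-mean model so that the posterior used as a stand-in for $q$ has closed form; in real use one would refit the variational approximation for every simulated dataset and test each marginal separately.

```python
import numpy as np
from scipy.stats import norm, ks_2samp

rng = np.random.default_rng(1)
n_obs, sigma = 20, 1.0                   # data size and known noise sd

p_vals = []
for _ in range(500):
    theta0 = rng.standard_normal()       # draw theta_0 from the N(0,1) prior
    y = theta0 + sigma * rng.standard_normal(n_obs)
    # Stand-in "variational" posterior: the exact conjugate Gaussian;
    # in practice this is the q fitted (e.g. by ADVI) to the dataset y.
    post_var = 1.0 / (1.0 + n_obs / sigma**2)
    post_mean = post_var * y.sum() / sigma**2
    p_vals.append(norm.cdf(theta0, loc=post_mean, scale=np.sqrt(post_var)))

p = np.asarray(p_vals)
# If q is unbiased for the centre, the distribution of p should be
# symmetric about 1/2; compare p with 1 - p via a KS test.
print(ks_2samp(p, 1.0 - p))
```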

References

Yao, Y., Vehtari, A., Simpson, D., & Gelman, A. (2018). Yes, but Did It Work?: Evaluating Variational Inference. Proceedings of the 35th International Conference on Machine Learning.

Published in categories Memo