# Chain-Structured Models


An important probability distribution used in many applications is the chain-structured model.

It has the following form:

\[\pi(\mathbf x)\propto \exp\Big\{-\sum\limits_{i=1}^d h_i(x_{i-1},x_i)\Big\}\]

where $\mathbf x=(x_0, x_1,\ldots, x_d)$. This type of model can be seen as having a “Markovian structure” because the conditional distribution $\pi(x_i\mid \mathbf x_{[-i]})$, where $\mathbf x_{[-i]}=(x_0,\ldots,x_{i-1},x_{i+1},\ldots,x_d)$, depends only on the two neighboring variables $x_{i-1}$ and $x_{i+1}$; that is,

\[\pi(x_i\mid \mathbf x_{[-i]})\propto \exp\{-h_i(x_{i-1},x_i)-h_{i+1}(x_i, x_{i+1})\}\]

## Dynamic Programming

Suppose each $x_i$ in $\mathbf x$ takes values only in the finite set $\cal S=\{s_1,\ldots, s_k\}$. Maximizing the distribution $\pi(\mathbf x)$ is equivalent to minimizing its exponent

\[H(\mathbf x)=h_1(x_0,x_1)+\cdots+h_d(x_{d-1},x_d)\]

We can carry out the following recursive procedure:

- Define $m_1(x) = \underset{s_i\in \cal S}{\min}h_1(s_i, x),\; \mathrm{for}\; x=s_1,\ldots, s_k$.
- Recursively compute, for $t=2,\ldots,d$, the function \(m_t(x) = \underset{s_i\in\cal S}{\min}\{m_{t-1}(s_i)+h_t(s_i,x)\},\;\mathrm{for}\; x=s_1,\ldots,s_k\)

The optimal value of $H(\mathbf x)$ is then $\underset{s\in\cal S}{\min}\;m_d(s)$.

Then we can find out which $\mathbf x$ gives rise to the global minimum of $H(\mathbf x)$ by tracing backward as follows:

- Let $\hat x_d$ be the minimizer of $m_d(x)$; that is, $\hat x_d=\underset{s\in\cal S}{\arg\min}\;m_d(s)$.

- For $t=d-1,d-2,\ldots, 1$, we let $\hat x_t=\underset{s\in\cal S}{\arg\min}\;\{m_t(s)+h_{t+1}(s,\hat x_{t+1})\}$.

The configuration $\hat{\mathbf x}=(\hat x_1,\ldots,\hat x_d)$ obtained by this method is a minimizer of $H(\mathbf x)$.
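The recursion and backward trace above can be sketched as follows. This is a minimal illustration, not the book's code: it assumes states are encoded as indices $0,\ldots,k-1$, each $h_t$ is given as a $k\times k$ cost matrix, and backpointers are stored during the forward pass instead of re-minimizing during the trace (an equivalent formulation). It also recovers $\hat x_0$, which the minimization over $x_0$ inside $m_1$ otherwise absorbs. The function name `min_sum_dp` is invented for this sketch.

```python
import numpy as np

def min_sum_dp(h):
    """Minimize H(x) = sum_t h_t(x_{t-1}, x_t) over x_0, ..., x_d.

    h: list of d arrays, h[t-1][i, j] = h_t(s_i, s_j).
    Returns (minimum value, minimizing configuration as state indices).
    """
    d = len(h)
    m = h[0].min(axis=0)                 # m_1(x) = min over x_0 of h_1(x_0, x)
    back0 = h[0].argmin(axis=0)          # best x_0 for each x_1
    back = []
    for t in range(1, d):
        scores = m[:, None] + h[t]       # m_{t-1}(s_i) + h_t(s_i, x)
        back.append(scores.argmin(axis=0))
        m = scores.min(axis=0)           # m_t(x)
    # Trace backward from the minimizer of m_d
    xs = [int(m.argmin())]
    for t in range(d - 2, -1, -1):
        xs.append(int(back[t][xs[-1]]))
    xs.append(int(back0[xs[-1]]))        # recover x_0 as well
    return float(m.min()), xs[::-1]
```

Each forward step costs $O(k^2)$, so the whole minimization is $O(dk^2)$ rather than the $O(k^{d+1})$ of brute-force enumeration.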

## Exact Simulation

To simulate from $\pi$, we can decompose the overall summation as

\[Z\equiv\sum\limits_{\mathbf x}\exp\{-H(\mathbf x)\}=\sum\limits_{x_d}\Big[\cdots\Big[\sum\limits_{x_1}\Big\{\sum\limits_{x_0}\exp(-h_1(x_0,x_1))\Big\}\exp(-h_2(x_1,x_2))\Big]\cdots\Big]\]

Then we can adopt the following recursions.

- Define $V_1(x)=\sum_{x_0\in \cal S}\exp(-h_1(x_0, x))$
- Compute recursively, for $t=2,\ldots,d$, \(V_t(x)=\sum_{x_{t-1}\in\cal S}V_{t-1}(x_{t-1})\exp(-h_t(x_{t-1},x))\)

Then, the partition function is $Z=\sum_{x\in\cal S}V_d(x)$ and the marginal distribution of $x_d$ is $\pi(x_d)=V_d(x_d)/Z$.
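The forward recursion can be sketched as below, under the same assumptions as before (states as indices, each $h_t$ a $k\times k$ matrix); `forward_pass` is an illustrative name, not from the source. Note that $V_t$ is just a vector of length $k$, so each step is a matrix-vector product.

```python
import numpy as np

def forward_pass(h):
    """Compute V_1, ..., V_d and the partition function Z.

    h: list of d arrays with h[t-1][i, j] = h_t(s_i, s_j).
    """
    V = np.exp(-h[0]).sum(axis=0)    # V_1(x) = sum_{x_0} exp(-h_1(x_0, x))
    Vs = [V]
    for ht in h[1:]:                 # V_t(x) = sum_{x_{t-1}} V_{t-1}(x_{t-1}) exp(-h_t(x_{t-1}, x))
        V = V @ np.exp(-ht)
        Vs.append(V)
    Z = Vs[-1].sum()                 # Z = sum_x V_d(x)
    return Vs, Z
```

In practice, for long chains one would work with $\log V_t$ to avoid numerical underflow; the plain form above is kept to mirror the recursion in the text.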

To simulate $\mathbf x$ from $\pi$, we can do the following:

- Draw $x_d$ from $\cal S$ with probability $V_d(x_d)/Z$
- For $t=d-1,d-2,\ldots, 1$, we draw $x_t$ from the distribution \(\pi(x_t\mid x_{t+1},\ldots,x_d)\propto V_t(x_t)\exp(-h_{t+1}(x_t,x_{t+1}))\)

The random sample $\mathbf x=(x_1,\ldots,x_d)$ obtained in this way follows the distribution $\pi(\mathbf x)$.
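The backward sampling procedure can be sketched as follows, again assuming states are indices and each $h_t$ is a $k\times k$ matrix (`chain_sample` is an invented name for this illustration). The sketch also draws $x_0\propto\exp(-h_1(x_0,x_1))$ at the end, extending the sample to the full chain $(x_0,\ldots,x_d)$.

```python
import numpy as np

def chain_sample(h, rng):
    """Draw one exact sample from pi(x) ∝ exp(-sum_t h_t(x_{t-1}, x_t)).

    h: list of d arrays with h[t-1][i, j] = h_t(s_i, s_j).
    rng: numpy random Generator.
    """
    d = len(h)
    # Forward pass: V_1(x) = sum_{x_0} exp(-h_1(x_0, x)), then
    # V_t(x) = sum_{x_{t-1}} V_{t-1}(x_{t-1}) exp(-h_t(x_{t-1}, x))
    V = [np.exp(-h[0]).sum(axis=0)]
    for t in range(1, d):
        V.append(V[-1] @ np.exp(-h[t]))
    Z = V[-1].sum()
    # Backward pass: x_d ∝ V_d, then x_t ∝ V_t(x_t) exp(-h_{t+1}(x_t, x_{t+1}))
    x = [int(rng.choice(len(V[-1]), p=V[-1] / Z))]
    for t in range(d - 2, -1, -1):
        w = V[t] * np.exp(-h[t + 1][:, x[-1]])
        x.append(int(rng.choice(len(w), p=w / w.sum())))
    # Finally draw x_0 ∝ exp(-h_1(x_0, x_1))
    w0 = np.exp(-h[0][:, x[-1]])
    x.append(int(rng.choice(len(w0), p=w0 / w0.sum())))
    return x[::-1]
```

Because the backward pass only reweights the stored $V_t$ vectors, each sample costs $O(dk)$ after the one-time $O(dk^2)$ forward pass.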


## References

Liu, Jun S. Monte Carlo strategies in scientific computing. Springer Science & Business Media, 2008.