# Rademacher Complexity


This post is based on the material of the second lecture of STAT 6050, taught by Prof. Wicker, and mainly refers to the more formal presentation in the book *Foundations of Machine Learning* by Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar (The MIT Press, 2012).

First, he proved the following generalization bounds.

Let $\cF$ be a family of functions mapping from $\cX$ to $[0, 1]$. Then, for any $\delta > 0$, with probability at least $1-\delta$, each of the following holds for all $f\in\cF$:
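Presumably the two bounds are the standard ones (cf. Mohri et al., Theorem 3.1): one in terms of the Rademacher complexity $\fR_m(\cF)$ and one in terms of its empirical counterpart $\hat\fR_S(\cF)$,

$$\E[f(x)] \le \frac1m\sum_{i=1}^m f(x_i) + 2\fR_m(\cF) + \sqrt{\frac{\log(1/\delta)}{2m}},\qquad
\E[f(x)] \le \frac1m\sum_{i=1}^m f(x_i) + 2\hat\fR_S(\cF) + 3\sqrt{\frac{\log(2/\delta)}{2m}}.$$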

Then we apply McDiarmid's inequality to the function $\phi(S) = \sup_{f\in\cF}\big(\E[f] - \hat\E_S[f]\big)$. Note that

follows from the sub-additivity of the supremum function,

and its variant,

Similarly, $\phi(S') - \phi(S) \le \frac 1m$, and hence $\vert \phi(S)-\phi(S')\vert \le \frac 1m$ whenever $S$ and $S'$ differ in a single point $i\in\{1,\ldots,m\}$. The expectation of $\phi(S)$ can be bounded as follows,
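Presumably the omitted display is the standard ghost-sample symmetrization chain,

$$\E_S[\phi(S)] \le \E_{S,S'}\Big[\sup_{f\in\cF}\frac1m\sum_{i=1}^m\big(f(x_i')-f(x_i)\big)\Big]
= \E_{\sigma,S,S'}\Big[\sup_{f\in\cF}\frac1m\sum_{i=1}^m\sigma_i\big(f(x_i')-f(x_i)\big)\Big]
\le 2\,\E_{\sigma,S}\Big[\sup_{f\in\cF}\frac1m\sum_{i=1}^m\sigma_i f(x_i)\Big] = 2\fR_m(\cF),$$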

where the $\sigma_i$ are Rademacher variables, that is, independent random variables uniformly distributed on $\{-1,+1\}$. Now we can apply McDiarmid's inequality,

which implies

thus, with probability at least $1-\delta$,

that is,

combining with \eqref{eq:1}, we prove the first result. To derive a bound in terms of $\hat\fR_S(\cF)$, notice that

and similarly $\hat\fR_S(\cF) - \hat\fR_{S'}(\cF)\le \frac 1m$, and hence $\vert \hat\fR_S(\cF) - \hat\fR_{S'}(\cF)\vert\le \frac 1m$. Using McDiarmid's inequality again (this time the other side), with probability at least $1-\delta/2$ the following holds,

which implies that, with probability at least $1-\delta/2$,

combining with \eqref{eq:bound_m} yields, with probability at least $1-\delta$:

which proves \eqref{eq:bound_s}.

I think there is some abuse of notation. In verifying the bounded difference condition \eqref{eq:s_sprime}, $S$ and $S'$ should be treated as two fixed samples (not random variables) differing in only one point, while in the calculation of $\E_S\phi(S)$ \eqref{eq:S_Sprime}, $S$ and $S'$ are two i.i.d. samples, and $S'$ is also called the ghost sample.

Then, as an application, he illustrated the Rademacher complexity of linear classifiers. Consider $\cF=\{x\mapsto w^Tx : \Vert w\Vert \le \Lambda\}$ and $x\in \cX$ with $\Vert x\Vert \le r$; then
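Presumably the omitted chain of (in)equalities is the standard one, using Cauchy–Schwarz and then Jensen's inequality:

$$\hat\fR_S(\cF) = \frac1m \E_\sigma\Big[\sup_{\Vert w\Vert\le\Lambda}\sum_{i=1}^m \sigma_i w^Tx_i\Big]
\le \frac{\Lambda}{m}\,\E_\sigma\Big[\Big\Vert\sum_{i=1}^m\sigma_i x_i\Big\Vert\Big]
\le \frac{\Lambda}{m}\Big[\E_\sigma\Big\Vert\sum_{i=1}^m\sigma_i x_i\Big\Vert^2\Big]^{1/2}
= \frac{\Lambda}{m}\Big[\sum_{i,j}\E[\sigma_i\sigma_j]\,x_i^Tx_j\Big]^{1/2}
= \frac{\Lambda}{m}\Big[\sum_{i=1}^m\Vert x_i\Vert^2\Big]^{1/2}
\le \frac{\Lambda r}{\sqrt m},$$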

where \eqref{eq:sigma_ij} follows from the facts that $\sigma_i^2 = 1$, while $\E[\sigma_i\sigma_j]=\E[\sigma_i]\,\E[\sigma_j] = 0$ for $i\neq j$ since $\E\sigma_i=0$.
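The bound $\hat\fR_S(\cF)\le \Lambda r/\sqrt m$ can be sanity-checked numerically: for the linear class, the sup over $\Vert w\Vert\le\Lambda$ equals $\Lambda\Vert\sum_i\sigma_i x_i\Vert$ by Cauchy–Schwarz, so the empirical Rademacher complexity can be computed exactly by enumerating all sign vectors. A minimal sketch; the data, seed, and helper name are illustrative:

```python
import itertools
import math

import numpy as np


def emp_rademacher_linear(X, Lam):
    """Exact empirical Rademacher complexity of {x -> w^T x : ||w|| <= Lam}.

    For each sign vector sigma, sup_{||w||<=Lam} w^T (sum_i sigma_i x_i)
    equals Lam * ||sum_i sigma_i x_i|| (Cauchy-Schwarz, attained)."""
    m = X.shape[0]
    total = 0.0
    for sigma in itertools.product([-1, 1], repeat=m):
        v = np.asarray(sigma) @ X              # sum_i sigma_i x_i
        total += Lam * np.linalg.norm(v) / m
    return total / 2 ** m                      # average over all 2^m sign vectors


rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                    # m = 8 sample points in R^3
Lam = 2.0
r = max(np.linalg.norm(x) for x in X)          # radius bound on the sample

rc = emp_rademacher_linear(X, Lam)
assert 0 < rc <= Lam * r / math.sqrt(len(X)) + 1e-12
```

With only $2^8 = 256$ sign vectors the expectation over $\sigma$ is computed exactly rather than by Monte Carlo, so the check is deterministic.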

Remarks:

- We are not interested in $\E w^Tx \le \hat \E_S(w^Tx) + \cdots$ per se, but in bounding a risk functional, for example $\E_{(x, y)\sim D}\1_{y\neq f(x)}$.
- A less well-known inequality than Jensen's is the Khinchin-Kahane inequality.

The Khinchin-Kahane inequality is a classical inequality in the probability literature. It was initially studied by Khinchin in the real case and later extended by Kahane to normed linear spaces. A general version for Banach spaces (a vector space $X$ over a scalar field $K$, equipped with a norm $\Vert\cdot\Vert_X$ and complete with respect to the distance induced by that norm), together with sharp constants in some cases, is as follows:

For all $p, q\in[1, \infty)$, there exists a universal constant $C_{p,q} > 0$ depending only on $p,q$ such that for all choices of Banach spaces $\bbB$, finite sets of vectors $x_1,\ldots,x_n\in \bbB$, and independent Rademacher variables $\sigma_1,\ldots,\sigma_n$,
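Presumably the omitted inequality reads, for $p\le q$,

$$\Big(\E\,\Big\Vert\sum_{i=1}^n \sigma_i x_i\Big\Vert_\bbB^q\Big)^{1/q}
\;\le\; C_{p,q}\,\Big(\E\,\Big\Vert\sum_{i=1}^n \sigma_i x_i\Big\Vert_\bbB^p\Big)^{1/p}.$$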

If moreover $p=1\le q\le 2$, then the constant $C_{1,q}=2^{1-1/q}$ is optimal.
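In the scalar case $\bbB=\IR$ with $p=1$, $q=2$, the optimal constant $2^{1-1/2}=\sqrt 2$ is attained at $x=(1,1)$; since the expectation over signs is a finite average, this can be verified exactly by enumeration. A sketch with illustrative coefficients and helper name:

```python
import itertools


def sign_moments(x, p, q):
    """Exact L^p and L^q norms of S = sum_i sigma_i x_i under uniform signs."""
    Lp = Lq = 0.0
    for sigma in itertools.product([-1, 1], repeat=len(x)):
        s = abs(sum(si * xi for si, xi in zip(sigma, x)))
        Lp += s ** p
        Lq += s ** q
    n = 2 ** len(x)
    return (Lp / n) ** (1 / p), (Lq / n) ** (1 / q)


# The extremal example x = (1, 1): E|S| = 1 while (E S^2)^{1/2} = sqrt(2),
# so the ratio equals the optimal constant 2^{1 - 1/2}.
L1, L2 = sign_moments([1.0, 1.0], 1, 2)
assert abs(L2 / L1 - 2 ** 0.5) < 1e-12

# For arbitrary coefficients the ratio never exceeds sqrt(2).
L1b, L2b = sign_moments([0.3, 1.2, -0.7, 0.5], 1, 2)
assert L2b / L1b <= 2 ** 0.5 + 1e-12
```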

He said that, building on these results for linear functions, we can obtain results for other classes of functions, such as neural networks, that is,

where $\cF_d$ denotes the class of neural networks with $d$ layers.
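The omitted display cannot be recovered exactly from these notes, but one classical bound of this flavor (a hedged sketch, assuming $\ell_1$-bounded weights $\Vert w\Vert_1\le\Lambda$ in each layer and an $L$-Lipschitz activation $\phi$) is the per-layer recursion

$$\hat\fR_S(\cF_d)\;\le\;2\,L\,\Lambda\,\hat\fR_S(\cF_{d-1}),$$

obtained by combining the scaling and convexity properties of Rademacher complexity with the contraction lemma proved next.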

Let $\phi$ be an $L$-Lipschitz function. Then
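Presumably the omitted statement is (Talagrand's) contraction lemma:

$$\hat\fR_S(\phi\circ\cF)\;\le\;L\,\hat\fR_S(\cF).$$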

First fix a sample $S=(x_1,\ldots,x_m)$, then, by definition,

where $u_{m-1}(f)=\sum_{i=1}^{m-1}\sigma_i(\phi\circ f)(x_i)$. By definition of the supremum, for any $\epsilon > 0$, there exist $f_1,f_2\in \cF$ such that
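Presumably the omitted display, following Mohri et al.'s proof, chooses $f_1, f_2$ to $(1-\epsilon)$-approximate the two suprema:

$$u_{m-1}(f_1)+(\phi\circ f_1)(x_m)\;\ge\;(1-\epsilon)\Big[\sup_{f\in\cF}\,\big(u_{m-1}(f)+(\phi\circ f)(x_m)\big)\Big],\qquad
u_{m-1}(f_2)-(\phi\circ f_2)(x_m)\;\ge\;(1-\epsilon)\Big[\sup_{f\in\cF}\,\big(u_{m-1}(f)-(\phi\circ f)(x_m)\big)\Big].$$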

(It is unclear to me how/whether the nonnegativity of the $\sup_f$ terms is guaranteed.) Thus, for any $\epsilon > 0$,

Since $\epsilon > 0$ is arbitrary, we have

Iterating this procedure for $i=1,\ldots,m-1$, we can conclude the proof.

He remarked that Rademacher complexity will be our main tool for the upcoming topics of VC-dimension and covering numbers. Now let us look at some more properties.

- If $\alpha\in\IR$, then $\fR_m(\alpha\cF) = \vert \alpha\vert\fR_m(\cF)$
- $\fR_m(\cF+\cG)=\fR_m(\cF) +\fR_m(\cG)$
- $\fR_m(\{\max(f, g): f\in\cF, g\in\cG\})\le \fR_m(\cF) + \fR_m(\cG)$

- By the symmetry of $\sigma$.
- The second one can be proved by showing both $\le$ and $\ge$: the former follows from the sub-additivity of the supremum, while the latter is proved with the $1-\epsilon$ technique as in \eqref{eq:eps}.
- Decompose $\max(f,g)=\frac{f+g}{2} + \frac{\vert f-g\vert}{2}$, and observe that the absolute value is 1-Lipschitz.
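For a finite function class on a fixed sample, the (empirical) Rademacher complexity is a finite average over sign vectors, so the three properties above can be checked exactly by enumeration. A sketch; the class values and helper name are mine:

```python
import itertools

import numpy as np


def emp_rademacher(F):
    """Exact empirical Rademacher complexity of a finite class.

    F has shape (num_functions, m); F[k, i] stores f_k(x_i)."""
    F = np.asarray(F, dtype=float)
    m = F.shape[1]
    total = 0.0
    for sigma in itertools.product([-1, 1], repeat=m):
        total += np.max(F @ np.array(sigma)) / m   # sup_f (1/m) sum_i sigma_i f(x_i)
    return total / 2 ** m                          # average over the 2^m sign vectors


F = np.array([[0.2, -0.5, 1.0], [0.7, 0.1, -0.3]])
G = np.array([[0.0, 0.4, -0.2], [-0.6, 0.3, 0.5]])

# Property 1: R(alpha * F) = |alpha| * R(F), even for negative alpha.
alpha = -2.5
assert np.isclose(emp_rademacher(alpha * F), abs(alpha) * emp_rademacher(F))

# Property 2: R(F + G) = R(F) + R(G), with F + G = {f + g : f in F, g in G}.
FG = np.array([f + g for f in F for g in G])
assert np.isclose(emp_rademacher(FG), emp_rademacher(F) + emp_rademacher(G))

# Property 3: R({max(f, g)}) <= R(F) + R(G).
H = np.array([np.maximum(f, g) for f in F for g in G])
assert emp_rademacher(H) <= emp_rademacher(F) + emp_rademacher(G) + 1e-12
```

Enumerating all $2^m$ sign vectors makes the first two checks exact equalities rather than Monte Carlo approximations.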