WeiYa's Work Yard

A dog, who fell into the ocean of statistics, tries to write down his ideas and notes to save himself.

FDR Control via Data Splitting

Tags: False Discovery Rate

This note is for Dai, C., Lin, B., Xing, X., & Liu, J. S. (2020). False Discovery Rate Control via Data Splitting. ArXiv:2002.08542 [Stat].

Existing FDR control methods are mainly of two types:

  • based on the Benjamini-Hochberg (BHq) procedure, which requires (asymptotically) valid p-values for individual features
  • based on the “knockoff filtering” idea: does not require p-values for individual features, and achieves FDR control by creating “knockoff” features in a similar spirit as adding spike-in controls in biological experiments.
    • fixed-design knockoff filter
    • model-X knockoff filter

The paper proposes an FDR control framework based on data splitting.

In high-dimensional regressions, it can be challenging to either construct valid p-values (even asymptotically) or estimate accurately the joint distribution of features, thus limiting the applicability of both BHq and the model-X knockoff filter. The data-splitting framework proposed here appears to fill this gap.

  1. split the data into two halves
  2. apply two potentially different statistical learning procedures to each part of the data

Unlike most existing data-splitting (DS) methods, whose main motivation is to handle high dimensionality, this paper uses data splitting to obtain two independent measurements of the importance of each feature. FDR control is achieved by constructing a proper test statistic for each feature based on these two measurements.

The paper adopts a strategy similar to the knockoff filter to estimate the number of false positives. The main idea is to construct a test statistic $M_j$ for each feature $X_j$, referred to as the “mirror statistic”, with the following two key properties:

  • a feature with a larger mirror statistic is more likely to be a relevant feature
  • the sampling distribution of the mirror statistic of any null feature is symmetric about 0.

MDS is built upon multiple independent replications of DS, aiming at reducing the variability of the selection result. Instead of ranking features by their mirror statistics, MDS ranks features by their inclusion rates, i.e., selection frequencies adjusted by selection sizes, across the DS replications.
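
As a small illustration (my own sketch, not the authors' code), the inclusion rate of each feature can be computed from the index sets returned by the individual DS replications; `selections` below is a hypothetical list of such index arrays, one per replication:

```python
import numpy as np

def inclusion_rates(selections, p):
    """Inclusion rates across DS replications: selection frequencies
    adjusted by the selection size of each replication."""
    rates = np.zeros(p)
    for S_hat in selections:              # S_hat: indices selected in one DS replication
        if len(S_hat) > 0:
            rates[S_hat] += 1.0 / len(S_hat)
    return rates / len(selections)        # average the size-adjusted indicators
```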

Assumption 2.1 (symmetry): for each null feature index $j\in S_0$, the sampling distribution of either $\hat\beta_j^{(1)}$ or $\hat\beta_j^{(2)}$ is symmetric about 0.

A general form of the mirror statistic $M_j$ is

\[M_j = \sign(\hat\beta_j^{(1)}\hat\beta_j^{(2)})f(\vert \hat\beta_j^{(1)}\vert, \vert \hat\beta_j^{(2)}\vert)\]

where the function $f(u, v)$ is non-negative, symmetric in $u$ and $v$ (i.e., $f(u, v) = f(v, u)$), and monotonically increasing in both $u$ and $v$.
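
For concreteness, here is a minimal sketch (mine, not the paper's code) of the mirror statistic with one choice satisfying these conditions, $f(u, v) = u + v$:

```python
import numpy as np

def mirror_statistics(beta1, beta2):
    """M_j = sign(beta1_j * beta2_j) * f(|beta1_j|, |beta2_j|) with f(u, v) = u + v."""
    beta1, beta2 = np.asarray(beta1), np.asarray(beta2)
    return np.sign(beta1 * beta2) * (np.abs(beta1) + np.abs(beta2))
```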

By the symmetry of the null mirror statistics, for any threshold $t > 0$ the number of false positives $\#\{j\in S_0: M_j > t\}$ can be estimated by $\#\{j: M_j < -t\}$. The latter also counts the relevant features with $M_j < -t$, so the resulting FDP estimate is conservative, i.e., an over-estimate; the gap is exactly the contribution of those relevant features.
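
A sketch of the resulting data-driven selection rule (my own illustration, assuming the mirror statistics `M` have been computed as above): estimate the FDP at each candidate threshold by $\#\{j: M_j < -t\}/\#\{j: M_j > t\}$ and take the smallest threshold whose estimate falls below the nominal level $q$.

```python
import numpy as np

def ds_select(M, q):
    """Data-splitting selection: smallest t > 0 whose estimated FDP is <= q."""
    M = np.asarray(M)
    for t in np.sort(np.abs(M[M != 0])):               # candidate thresholds
        fdp_hat = np.sum(M < -t) / max(np.sum(M > t), 1)
        if fdp_hat <= q:
            return np.flatnonzero(M > t)                # indices of selected features
    return np.array([], dtype=int)                      # nothing passes the target level
```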

In order to obtain a good estimate of the number of false positives, the mirror statistics of the null features cannot be too correlated.

Assumption 2.2 (weak dependence): the mirror statistics $M_j$’s are continuous random variables, and there exist constants $c > 0$ and $\alpha \in (0, 2)$ such that

\[\Var\Big(\sum_{j\in S_0}1(M_j > t)\Big) \le cp_0^\alpha\,,\quad \forall t\in\IR, \quad\text{where } p_0 = \vert S_0\vert\,.\]

Assumption 2.2 only restricts the correlations among the null features in consideration, regardless of the correlations associated with the relevant features.

Suppose $\Var(M_j)$ is uniformly upper bounded and also lower bounded away from zero. For any nominal FDR level $q\in (0, 1)$, assume that there exists a constant $t_q > 0$ such that $P(FDP(t_q)\le q)\rightarrow 1$ as $p\rightarrow\infty$. Then, under Assumptions 2.1 and 2.2, the DS procedure with its data-driven threshold $\tau_q$ satisfies

\[FDP(\tau_q) \le q+o_p(1) \quad\text{and}\quad \limsup_{p\rightarrow \infty} FDR(\tau_q) \le q\,.\]

For $t\in \IR$, denote

\[\hat G_p^0(t) = \frac{1}{p_0}\sum_{j\in S_0}1(M_j > t), \qquad G_p^0(t) = \frac{1}{p_0} \sum_{j\in S_0}P(M_j > t)\,,\]

\[\hat G_p^1(t) = \frac{1}{p_1}\sum_{j\in S_1}1(M_j > t), \qquad \hat V_p^0(t) = \frac{1}{p_0}\sum_{j\in S_0}1(M_j < -t)\,.\]

Let $r_p = p_1/p_0$. In addition, denote

\[FDP_p(t) = \frac{\hat G_p^0(t)}{\hat G_p^0(t) + r_p\hat G_p^1(t)}\,,\quad FDP_p^\dagger(t) = \frac{\hat V_p^0(t)}{\hat G_p^0(t)+r_p\hat G_p^1(t)}\,,\quad \overline{FDP}_p(t) = \frac{G_p^0(t)}{G_p^0(t)+r_p\hat G_p^1(t)}\,.\]

Under Assumption 2.2, if $p_0\rightarrow \infty$ as $p \rightarrow\infty$, we have, in probability,

\[\sup_{t\in \IR}\vert \hat G_p^0(t) - G_p^0(t)\vert \rightarrow 0, \quad \sup_{t\in \IR}\vert \hat V_p^0(t) - G_p^0(t)\vert \rightarrow 0\,.\]

Note that the existence of $t_q > 0$ essentially guarantees the asymptotic feasibility of FDR control based upon the rankings of features by their mirror statistics. This is only a technical assumption for handling the most general setting, without specifying a parametric model between the response and the features.

Suppose DS asymptotically controls the FDP for any designated level $q\in (0, 1)$. Further, we assume that with probability approaching one, the power of DS is bounded below by some $\kappa > 0$. We consider the following two regimes with $n, p\rightarrow\infty$ at a proper rate.

  • In the non-sparse regime where $\liminf p_1/p > 0$, we assume that the mirror statistics are consistent at ranking features, that is, $\sup_{i\in S_1, j\in S_0}P(I_i < I_j) \rightarrow 0$.
  • In the sparse regime where $\limsup p_1/p = 0$, we assume that the mirror statistics are strongly consistent at ranking features, that is, $\sup_{i\in S_1}P(I_i < \max_{j\in S_0}I_j)\rightarrow 0$.

Then, for MDS in both the non-sparse and the sparse regimes, we have

\[FDP \le q + o_p(1)\quad \text{and}\quad \limsup_{n,p\rightarrow\infty} FDR\le q\,.\]

Specializations for Different Statistical Models

Linear Models

Suppose the data is generated from the model $y=X\beta^\star + \epsilon$, where $\epsilon \sim N(0, \sigma^2I_n)$.

Focus on the random-design scenario, in which each row of the design matrix $X$ is independently drawn from a $p$-dimensional distribution with covariance matrix $\Sigma$.

Consider the following Lasso+OLS procedure:

  1. apply Lasso to the first half of the data $(y^{(1)}, X^{(1)})$ to get the coefficient estimate $\hat\beta^{(1)}$ and $\hat S^{(1)}=\{j: \hat\beta_j^{(1)}\neq 0\}$
  2. restricted to the set of features in $\hat S^{(1)}$, apply OLS to get the estimate $\hat\beta^{(2)}$ using the second half of the data $(y^{(2)}, X^{(2)})$.
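
A minimal sketch of this two-step procedure (my own, using scikit-learn's `LassoCV` for step 1; the cross-validated penalty is an assumption, not prescribed by the paper):

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_ols_split(X, y, seed=0):
    """Return (beta1, beta2): Lasso estimate on the first half of the data and the OLS
    estimate, restricted to the Lasso-selected features, on the second half."""
    n, p = X.shape
    idx = np.random.default_rng(seed).permutation(n)
    i1, i2 = idx[: n // 2], idx[n // 2:]

    beta1 = LassoCV(cv=5).fit(X[i1], y[i1]).coef_      # step 1: Lasso on the first half
    S1 = np.flatnonzero(beta1)                          # \hat S^{(1)}

    beta2 = np.zeros(p)                                 # unselected features stay at 0
    if S1.size > 0:
        sol, *_ = np.linalg.lstsq(X[i2][:, S1], y[i2], rcond=None)  # step 2: OLS on the second half
        beta2[S1] = sol
    return beta1, beta2
```

The two estimates can then be plugged into the mirror-statistic construction above.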

If the sure screening property holds for Lasso, that is, all the relevant features are selected in step 1, then for any selected null feature $j\in S_0\cap \hat S^{(1)}$, its OLS estimate $\hat\beta_j^{(2)}$ follows a Normal distribution with mean zero conditioning on $X^{(2)}$. Thus the symmetry assumption is satisfied.

If $j \in S_0\setminus \hat S^{(1)}$, i.e., a null feature not selected by the Lasso, then $\hat\beta_j^{(2)}$ can be viewed as 0, so the symmetry about 0 holds trivially.

Further, using Mehler’s identity, the weak dependence assumption holds with high probability under the regularity condition and the tail condition in Assumption 3.1, which consists of the following conditions:

  1. (Signal strength condition) $\min_{j\in S_1}\vert \beta_j^\star\vert \gg \sqrt{p_1\log p/n}$
  2. (Regularity condition) $1/c < \lambda_\min(\Sigma) \le \lambda_\max(\Sigma) < c$ for some $c > 0$
  3. (Tail condition) $X\Sigma^{-1/2}$ has independent sub-Gaussian rows.
  4. (Sparsity condition) $p_1=o(n/\log p)$ and $p_1\rightarrow \infty$.

Consider both DS and MDS, in which the two coefficient estimates $\hat\beta^{(1)}$ and $\hat\beta^{(2)}$ are constructed using the Lasso+OLS procedure. For any nominal FDR level $q\in (0, 1)$, under Assumption 3.1, we have

\[\limsup_{n, p\rightarrow \infty} FDR \le q\,,\qquad \liminf_{n, p\rightarrow \infty} Power = 1\,,\]

in the asymptotic regime where $\log p=o(n^\xi)$ for some $\xi\in (0, 1)$.

By Assumption 3.1, as $n,p\rightarrow \infty$, the event $E_1 = \{S_1\subset \hat S^{(1)}\}$ holds with probability approaching 1.

In the following, we implicitly condition on the desired $(X^{(1)}, y^{(1)})$ such that $E_1$ holds.

Let $\hat S_0=\hat S^{(1)}\cap S_0, \hat p_0 = \vert \hat S_0\vert$ and $\hat p = \vert \hat S^{(1)}\vert$.

Since $p_1\rightarrow \infty$, by the sure screening property we have $\hat p\rightarrow \infty$.

Without loss of generality, assume $\hat p_0\rightarrow\infty$, otherwise the FDR control problem becomes trivial.

Define $R$ and its normalized version $R^0$ as

\[R = \left(\frac{1}{n/2}X^{(2)}_{\hat S^{(1)}}{}^\top X^{(2)}_{\hat S^{(1)}}\right)^{-1}, \quad R_{ij}^0 = \frac{R_{ij}}{\sqrt{R_{ii}R_{jj}}}\,,\]

in which $R^0$ characterizes the correlation structure of the OLS regression coefficients $\hat\beta^{(2)}$.
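
In numpy terms (my own sketch, with `X2_S` standing for $X^{(2)}_{\hat S^{(1)}}$):

```python
import numpy as np

def normalized_R(X2_S):
    """R = ((n/2)^{-1} X_S^T X_S)^{-1} and its normalized version R^0."""
    n_half = X2_S.shape[0]                  # number of rows in the second half of the data
    R = np.linalg.inv(X2_S.T @ X2_S / n_half)
    d = np.sqrt(np.diag(R))
    return R / np.outer(d, d)               # R^0_{ij} = R_{ij} / sqrt(R_{ii} R_{jj})
```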

Let

\[\Vert R^0_{\hat S_0}\Vert_1 = \sum_{i, j\in \hat S_0} \vert R^0_{ij}\vert\,,\qquad \Vert R^0_{\hat S_0}\Vert_2 = \Big(\sum_{i, j\in \hat S_0} \vert R^0_{ij}\vert^2\Big)^{1/2}\,.\]

We have the following probabilistic bound.

Under Assumption 3.1, we have $\Vert R^0_{\hat S_0}\Vert_1 = O_p(\hat p_0^{3/2})$.

Define the event $E_2 = \{\Vert R^0_{\hat S_0}\Vert_1 \le c_1\hat p_0^{3/2}\}$ for some constant $c_1 > 0$; by the bound above, $E_2$ holds with high probability.

Define $\hat G_p^0(t), G_p^0(t), \hat V_p^0(t)$ in analogy to the above, by replacing $p_0, p, S_0$ with $\hat p_0, \hat p, \hat S_0$, respectively.

For any $t\in \IR$, under Assumption 3.1, we have in probability

\[\vert \hat G_p^0(t) - G_p^0(t)\vert \rightarrow 0, \quad \vert \hat V_p^0(t) - G_p^0(t)\vert \rightarrow 0\,.\]

We prove the first claim by conditioning on the desired $(X^{(1)}, y^{(1)})$ and $X^{(2)}$ such that the high-probability event $E_1\cap E_2$ holds. We have the following decomposition,

\[\Var(\hat G_p^0(t)) = \frac{1}{\hat p_0^2} \sum_{j\in \hat S_0}\Var(1(M_j > t)) + \frac{1}{\hat p_0^2}\sum_{i\neq j\in \hat S_0}\Cov(1(M_i>t), 1(M_j > t))\,.\]

The first term is bounded by $1/\hat p_0$, since each $1(M_j > t)$ is a Bernoulli random variable and thus has variance at most $1/4$.

For the second term, w.l.o.g., assume $\hat\beta_i^{(1)} > 0$ and $\hat\beta_j^{(1)} > 0$. Note that $(\hat\beta_i^{(2)}, \hat\beta_j^{(2)})$ follows a bivariate Normal distribution with correlation $R_{ij}^0$.

Consider the general form of the mirror statistic, in which the function $f(u, v)$ is non-negative, symmetric in $u$ and $v$, and monotonically increasing in both $u$ and $v$. For any $t$ and $u \ge 0$, let

\[I_t(u) = \inf\{v\ge 0: f(u, v) > t\}\]

with the convention $\inf\emptyset = +\infty$.

Using Mehler’s identity (here $\Phi_r$ denotes the joint CDF of a bivariate standard normal with correlation $r$, and $\phi^{(n)}$ the $n$-th derivative of the standard normal density $\phi$), for any $t_1, t_2\in \IR$,

\[\Phi_r(t_1, t_2) = \Phi(t_1) \Phi(t_2) + \sum_{n=1}^\infty \frac{r^n}{n!}\phi^{(n-1)}(t_1)\phi^{(n-1)}(t_2)\,.\]

together with Lemma 1 in Azriel and Schwartzman (2015), i.e.,

\[\sum_{n=1}^\infty \frac{[\sup_{t\in\IR}\phi^{(n-1)}(t)]^2}{n!} < \infty \,,\]

we have

\[P(M_i > t, M_j > t) = P(\hat\beta_i^{(2)} > I_t(\hat\beta_i^{(1)}), \hat\beta_j^{(2)} > I_t(\hat\beta_j^{(1)})) \le P(\hat\beta_i^{(2)} > I_t(\hat\beta_i^{(1)}))P(\hat\beta_j^{(2)} > I_t(\hat\beta_j^{(1)})) + O(\vert R_{ij}^0\vert)\]

and hence

\[\Cov(1(M_i > t), 1(M_j > t)) = P(M_i > t, M_j > t) - P(M_i > t) P(M_j > t) = O(\vert R_{ij}^0\vert)\,.\]
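
As a quick numerical sanity check on Mehler's identity as stated above (my own verification, not from the paper, using $\phi^{(n)}(t) = (-1)^n \mathrm{He}_n(t)\phi(t)$ with the probabilists' Hermite polynomials):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval
from scipy.stats import norm, multivariate_normal

def mehler_cdf(t1, t2, r, n_terms=30):
    """Partial sum of Mehler's expansion for the bivariate standard normal CDF."""
    total = norm.cdf(t1) * norm.cdf(t2)
    for n in range(1, n_terms + 1):
        c = np.zeros(n); c[-1] = 1.0                            # coefficients picking out He_{n-1}
        d1 = (-1) ** (n - 1) * hermeval(t1, c) * norm.pdf(t1)   # phi^{(n-1)}(t1)
        d2 = (-1) ** (n - 1) * hermeval(t2, c) * norm.pdf(t2)   # phi^{(n-1)}(t2)
        total += r ** n / factorial(n) * d1 * d2
    return total

t1, t2, r = 0.3, -0.7, 0.5
exact = multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]]).cdf([t1, t2])
print(mehler_cdf(t1, t2, r), exact)                              # the two numbers should nearly agree
```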

Conditioning on the event $E_2$, it follows that

\[\frac{1}{\hat p_0^2}\sum_{i\neq j\in \hat S_0} \Cov(1(M_i > t), 1(M_j > t)) \le \frac{c_2}{\hat p_0^2}\Vert R^0_{\hat S_0}\Vert_1 \le \frac{c_1c_2}{\sqrt{\hat p_0}}\]

for some constant $c_2 > 0$. The proof of the lemma concludes by Markov’s inequality.


Published in categories Note