WeiYa's Work Yard

A dog, who fell into the ocean of statistics, tries to write down his ideas and notes to save himself.

Test of Monotonicity

Tags: Monotone Function, Economy

This note is for Chetverikov, D. (2019). Testing Regression Monotonicity in Econometric Models. Econometric Theory, 35(4), 729–776.

Monotonicity is a key qualitative prediction of a wide array of economic models derived via robust comparative statics.

  • A general nonparametric framework for testing monotonicity of a regression function

The concept of monotonicity plays an important role in economics.

  • in economic theory, monotone comparative statics has been a popular research topic for many years
  • in industrial organization, lack of monotonicity has been used to detect phenomena related to the strategic behavior of economic agents that are otherwise difficult to detect
  • in econometrics, shape restrictions including monotonicity have been argued to be among the most important implications of economic theory that could be used for identification and estimation

Consider the model

\[Y = f(X) + \varepsilon\]

where $E[\varepsilon\mid X] = 0$.
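
As a running illustration, here is a minimal simulation of this model in Python/NumPy; the particular $f$ and the noise scale are hypothetical choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
X = rng.uniform(0.0, 1.0, size=n)        # random design points
f = lambda x: x ** 3                     # a nondecreasing f, so H0 holds
eps = rng.normal(0.0, 0.1 * (1.0 + X))   # heteroscedastic noise with E[eps | X] = 0
Y = f(X) + eps
```

Swapping in a non-monotone $f$, say `lambda x: x - np.sin(2 * np.pi * x) / 4`, produces data from the alternative.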

Many statistics suitable for testing monotonicity may have highly complicated limit distributions. The paper therefore provides bootstrap critical values and proves their validity uniformly over a large class of data-generating processes.

Literature on Testing Monotonicity

  • Gijbels et al. (2000) and Ghosal et al. (2000): based on the signs of $(Y_{i+k}-Y_i)(X_{i+k}-X_i)$; these tests may be inconsistent against models with conditional heteroscedasticity
  • Hall and Heckman (2000): based on the slopes of local linear estimates of $f(\cdot)$; this statistic is contained in the class of test statistics studied in this paper
    • however, validity of their test is only established for (nonrandom) equidistant $X_i$'s, and the test is not shown to be adaptive and rate optimal
  • Schlee (1982): does not seem to be practical, see Gijbels et al. (2000)
  • Bowman et al. (1998): known to be inconsistent, see Hall and Heckman (2000).
  • Durot (2003): validity is only established for the case of (nonrandom) equidistant $X_i$'s and i.i.d. $\varepsilon_i$'s
  • Baraud et al. (2005): similar to Hall and Heckman (2000), but the validity of the test is only established in the homoscedastic Gaussian noise case
  • Lee, Song, and Whang (2017): test a general class of functional inequalities, including regression monotonicity, based on $L_p$ functionals
    • pro: it can be applied not only to the problem of testing regression monotonicity but also to many other problems, like testing monotonicity of nonparametric quantile functions.
    • con: it yields a nonadaptive test
  • Romano and Wolf (2013): assume that $X$ is nonstochastic and discrete, which makes their problem semiparametric and substantially simplifies proving the validity of critical values
    • they test the null hypothesis that $f(\cdot)$ is not weakly increasing against the alternative that it is weakly increasing
  • Lee, Linton, and Whang (2009) and Delgado and Escanciano (2010): tests of stochastic monotonicity (the conditional cdf of $Y$ given $X$ is weakly decreasing in $x$ for any fixed $y$), which is a related but different problem.

Testing monotonicity is related to but different from the problem of testing conditional moment inequalities, which is concerned with testing the null hypothesis that $f(\cdot)$ is nonnegative against the alternative that there is $x$ such that $f(x) < 0$. The latter has been studied extensively in the recent econometrics literature:

  • Andrews and Shi, 2010; Chernozhukov et al., 2013; Armstrong, 2014; Armstrong and Chan, 2016; and Chetverikov, 2016 among others

Under the null hypothesis, testing conditional moment inequalities yields the inequalities $E[Y_i\mid X_i]\ge 0$, whereas testing monotonicity yields the inequalities

\[E[Y_i-Y_j\mid X_i, X_j, X_i > X_j] \ge 0\,,\]

each of which depends on the pair of observations $(i, j)$.

Tests

Let $Q(\cdot, \cdot):\IR\times \IR\rightarrow \IR$ be a non-negative and symmetric weighting function, so that $Q(x_1,x_2)=Q(x_2, x_1)$ and $Q(x_1,x_2)\ge 0$, and let

\[b= \frac 12 \sum_{1\le i, j\le n}(Y_i-Y_j)\sign(X_j - X_i) Q(X_i, X_j)\]

be a test function.

  • Under $H_0$, i.e., when the function $f(\cdot)$ is nondecreasing, $E[b]\le 0$.
  • If $H_0$ is violated and $f(\cdot)$ is smooth, there exists a function $Q(\cdot, \cdot)$ such that $E[b] > 0$.

$b$ can be used to form a test statistic if there is an effective mechanism to find an appropriate weighting function $Q(\cdot, \cdot)$.
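
A minimal sketch of this test function on the simulated data above (the constant weight `Q_flat` is a hypothetical choice for illustration):

```python
import numpy as np

def test_function_b(X, Y, Q):
    """b = (1/2) * sum_{i,j} (Y_i - Y_j) * sign(X_j - X_i) * Q(X_i, X_j)."""
    dY = Y[:, None] - Y[None, :]          # dY[i, j] = Y_i - Y_j
    S = np.sign(X[None, :] - X[:, None])  # S[i, j]  = sign(X_j - X_i)
    W = Q(X[:, None], X[None, :])         # W[i, j]  = Q(X_i, X_j)
    return 0.5 * np.sum(dY * S * W)

# with a flat weight, b should tend to be non-positive under H0
Q_flat = lambda x1, x2: np.ones_like(x1 + x2)
print(test_function_b(X, Y, Q_flat))  # X, Y from the simulation sketch above
```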

The paper uses the adaptive testing approach developed in the statistics literature: choose the weighting function $Q(\cdot, \cdot)$, from a large set of potentially useful weighting functions, that maximizes the studentized version of $b$.

Consider

\[b(s) = \frac 12 \sum_{1\le i,j\le n}(Y_i - Y_j)\sign(X_j-X_i)Q(X_i,X_j,s) = \sum_{i=1}^nY_i\left(\sum_{1\le j\le n}\sign(X_j-X_i)Q(X_i, X_j, s)\right)\]
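
The second equality uses the symmetry of $Q$; it can be checked numerically (the Gaussian-product weight below is an arbitrary symmetric choice for the check):

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y = rng.uniform(size=50), rng.normal(size=50)

W = np.exp(-((X[:, None] - 0.5) ** 2 + (X[None, :] - 0.5) ** 2) / 0.1)  # symmetric weight
S = np.sign(X[None, :] - X[:, None])                                    # S[i, j] = sign(X_j - X_i)

b_double = 0.5 * np.sum((Y[:, None] - Y[None, :]) * S * W)  # double-sum form
b_linear = np.sum(Y * np.sum(S * W, axis=1))                # linear-in-Y form
assert np.isclose(b_double, b_linear)
```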

Conditional on $\{X_i\}_{1\le i\le n}$, the variance of $b(s)$ is

\[V(s) = \sum_{1\le i\le n} \sigma_i^2 \left(\sum_{1\le j\le n}\sign(X_j-X_i)Q(X_i, X_j, s)\right)^2\]

where $\sigma_i=(E[\varepsilon_i^2\mid X_i])^{1/2}$ and $\varepsilon_i=Y_i-f(X_i)$. In general, the $\sigma_i$'s are unknown and have to be estimated from the data.
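
One simple possibility, not necessarily the estimator used in the paper, is a Rice-type nearest-neighbor difference estimator:

```python
import numpy as np

def sigma2_nn(X, Y):
    """Estimate sigma_i^2 = E[eps_i^2 | X_i] by differencing Y_i against
    its neighbor in the X-ordering (a Rice-type estimator)."""
    order = np.argsort(X)
    ys = Y[order]
    d = np.empty_like(ys, dtype=float)
    d[1:] = ys[1:] - ys[:-1]   # difference with the previous sorted point
    d[0] = ys[1] - ys[0]       # the first point uses the next one
    s2 = np.empty_like(d)
    s2[order] = d ** 2 / 2.0   # E[(eps_i - eps_j)^2] / 2 ~ sigma_i^2 locally
    return s2
```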

Replacing each $\sigma_i$ with an estimate $\hat\sigma_i$ gives $\hat V(s)$. The general form of the test statistic is

\[T=\max_{s\in \cS_n}\frac{b(s)}{(\hat V(s))^{1/2}}\]

Large values of $T$ indicate that the null hypothesis $H_0$ is violated.
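
A sketch of the statistic, assuming a precomputed family of weight matrices $W_s$ with $W_s[i,j] = Q(X_i, X_j, s)$ and variance estimates $\hat\sigma_i^2$ (e.g., from `sigma2_nn` above):

```python
import numpy as np

def test_statistic(X, Y, weight_family, sigma2):
    """T = max over s of b(s) / sqrt(V_hat(s))."""
    S = np.sign(X[None, :] - X[:, None])   # S[i, j] = sign(X_j - X_i)
    T = -np.inf
    for W in weight_family:                # W[i, j] = Q(X_i, X_j, s)
        a = np.sum(S * W, axis=1)          # a_i = sum_j sign(X_j - X_i) Q(X_i, X_j, s)
        b = np.sum(Y * a)                  # b(s) written as a linear form in Y
        V = np.sum(sigma2 * a ** 2)        # V_hat(s)
        if V > 0:                          # skip degenerate weights
            T = max(T, b / np.sqrt(V))
    return T
```

Critical values for $T$ come from the paper's bootstrap procedures, which are not sketched here.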

The set $\cS_n$ determines the adaptivity properties of the test, that is, the ability of the test to detect many different deviations from $H_0$.

The downside of adaptivity is that expanding the set $\cS_n$ increases the critical value, and thus decreases the power of the test against those alternatives that can be detected by weighting functions already included in $\cS_n$.

For example, $Q$ can be a kernel weighting function:

\[Q(x_1,x_2,s) = \vert x_1-x_2\vert^k K\left(\frac{x_1-x}{h}\right)K\left(\frac{x_2-x}{h}\right)\]

where $s = (x, h)$ collects a location $x$ and a bandwidth $h$, and $K(\cdot)$ is a kernel function.
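
A sketch of this family; the biweight-type kernel and the grids of locations and bandwidths are hypothetical choices:

```python
import numpy as np

def kernel_weight_family(X, locations, bandwidths, k=0):
    """Build {W_s : s = (x, h)} with Q(x1, x2, s) = |x1 - x2|^k K((x1-x)/h) K((x2-x)/h)."""
    K = lambda u: np.where(np.abs(u) <= 1.0, (1.0 - u ** 2) ** 2, 0.0)  # biweight-type kernel
    D = np.abs(X[:, None] - X[None, :]) ** k
    family = []
    for x in locations:
        for h in bandwidths:
            kx = K((X - x) / h)            # K((X_i - x)/h) for all i
            family.append(D * np.outer(kx, kx))
    return family

# e.g., combined with the earlier sketches:
# T = test_statistic(X, Y, kernel_weight_family(X, np.linspace(0, 1, 20), [0.1, 0.25, 0.5]),
#                    sigma2_nn(X, Y))
```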
