# Union-intersection tests and Intersection-union tests

##### Posted on Dec 02, 2019 (Update: Dec 03, 2019)

This post is based on section 8.3 of Casella and Berger (2001).

In some situations, tests for complicated null hypotheses can be developed from tests for simpler null hypotheses.

## Union-Intersection Method

The union-intersection method is useful when the null hypothesis can be written as

$$H_0:\theta\in \bigcap_{\gamma\in\Gamma}\Theta_\gamma\,,$$

where $\Gamma$ is an arbitrary index set that may be finite or infinite. Suppose the rejection region for the test

$$H_{0\gamma}:\theta\in\Theta_\gamma\quad\text{versus}\quad H_{1\gamma}:\theta\in\Theta_\gamma^c$$

is $\{x:T_\gamma(x)\in R_\gamma\}$. Then the rejection region for the union-intersection test is

$$\label{eq:8.2.4} \bigcup_{\gamma\in \Gamma} \{x:T_\gamma(x)\in R_\gamma\}\,.$$

In particular, suppose that each of the individual tests has a rejection region of the form $\{x:T_\gamma(x)> c\}$, where $c$ does not depend on $\gamma$. Then \eqref{eq:8.2.4} becomes

$$\bigcup_{\gamma\in \Gamma}\{x:T_\gamma(x)>c\} = \Big\{x:\sup_{\gamma\in\Gamma}T_\gamma(x)>c\Big\}\,,$$

which implies that the test statistic for testing $H_0$ is $T(x) = \sup_{\gamma\in \Gamma}T_\gamma(x)$.
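As a concrete sketch (my own illustration, not from the text): for iid $N(\mu,\sigma^2)$ data, $H_0:\mu=\mu_0$ is the intersection of $H_{01}:\mu\le\mu_0$ and $H_{02}:\mu\ge\mu_0$, and taking the sup of the two one-sided $t$ statistics recovers the usual two-sided $|t|$ test:

```python
import numpy as np

def uit_reject(x, mu0, c):
    """Union-intersection test of H0: mu = mu0 for iid normal data.

    H0 is the intersection of H01: mu <= mu0 and H02: mu >= mu0.
    The one-sided tests reject for large t and large -t respectively,
    so the UIT statistic is the sup (here, max) of the two, i.e. |t|.
    """
    n = len(x)
    t = (np.mean(x) - mu0) / (np.std(x, ddof=1) / np.sqrt(n))
    T = max(t, -t)  # sup over gamma: equals |t|
    return T, T > c

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=30)
T, rejected = uit_reject(x, mu0=0.0, c=2.045)  # c roughly t_{.025, 29}
print(T, rejected)
```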

## Intersection-Union Method

Suppose we wish to test the null hypothesis

$$H_0:\theta\in \bigcup_{\gamma\in\Gamma}\Theta_\gamma\,;$$

then the rejection region for the intersection-union test of $H_0$ versus $H_1$ is

$$\bigcap_{\gamma\in \Gamma}\{x:T_\gamma(x)\in R_\gamma\}\,.$$

Again, the test can be greatly simplified if the rejection regions for the individual hypotheses are all of the form $\{x:T_\gamma(x)\ge c\}$. In such cases, the rejection region for $H_0$ is

$$\bigcap_{\gamma\in \Gamma}\{x:T_\gamma(x)\ge c\} = \Big\{x:\inf_{\gamma\in\Gamma}T_\gamma(x)\ge c\Big\}\,,$$

and hence the test statistic is $\inf_{\gamma\in\Gamma}T_\gamma(x)$.
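A standard instance (my own sketch; this is the two one-sided tests, or TOST, procedure for equivalence testing) tests $H_0:\mu\le a$ or $\mu\ge b$ against $H_1:a<\mu<b$ by requiring both one-sided $t$ tests to reject, i.e. by comparing the min of the two statistics to $c$:

```python
import numpy as np

def iut_reject(x, low, high, c):
    """Intersection-union (TOST-style) test of
    H0: mu <= low or mu >= high   vs   H1: low < mu < high.

    Each one-sided t test rejects for a large statistic; the IUT
    rejects only when BOTH do, i.e. when the inf (min) exceeds c.
    """
    n = len(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    t_low = (np.mean(x) - low) / se    # rejects H0: mu <= low when large
    t_high = (high - np.mean(x)) / se  # rejects H0: mu >= high when large
    T = min(t_low, t_high)             # inf over gamma
    return T, T > c

rng = np.random.default_rng(1)
x = rng.normal(loc=0.2, scale=1.0, size=40)
T, rejected = iut_reject(x, low=-1.0, high=1.0, c=1.685)  # c roughly t_{.05, 39}
print(T, rejected)
```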

## Likelihood Ratio Test

The likelihood ratio test statistic for testing $H_0:\theta\in \Theta_0$ versus $H_1:\theta\in \Theta_0^c$ is

$$\lambda(x) = \frac{\sup_{\Theta_0}L(\theta\mid x)}{\sup_{\Theta}L(\theta\mid x)}\,.$$

A likelihood ratio test is any test that has a rejection region of the form $\{x:\lambda(x)\le c\}$, where $c$ is any number satisfying $0\le c\le 1$.
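For a concrete case (my own sketch): with $X_1,\ldots,X_n$ iid $N(\theta,\sigma^2)$, $\sigma$ known, and $H_0:\theta=\theta_0$, the unrestricted sup is attained at the MLE $\hat\theta=\bar x$, and $\lambda$ has a closed form:

```python
import numpy as np

def lrt_statistic(x, theta0, sigma=1.0):
    """lambda(x) for H0: theta = theta0 vs H1: theta != theta0,
    X_i iid N(theta, sigma^2) with sigma known.

    The numerator sup is L(theta0 | x); the denominator sup is attained
    at the MLE theta_hat = x-bar, and the ratio simplifies to
    exp(-n (x-bar - theta0)^2 / (2 sigma^2)).
    """
    n = len(x)
    xbar = np.mean(x)
    return np.exp(-n * (xbar - theta0) ** 2 / (2 * sigma ** 2))

lam = lrt_statistic(np.array([1.0, -1.0]), theta0=0.0)
print(lam)  # x-bar equals theta0 here, so lambda = 1.0
```

Rejecting when $\lambda(x)\le c$ is then equivalent to rejecting for large $|\bar x-\theta_0|$, recovering the usual $z$ test.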

## Sizes of UIT and IUT

Due to the way in which they are constructed, the sizes of UIT and IUT can often be bounded above by the sizes of other tests. Such bounds are useful if a level $\alpha$ test is wanted, but the size of the UIT or IUT is too difficult to evaluate.

Consider testing $H_0:\theta\in \Theta_0$ versus $H_1:\theta\in \Theta^c_0$, where $\Theta_0 = \bigcap_{\gamma\in \Gamma}\Theta_\gamma$ and $\lambda_\gamma(x)$ is the LRT statistic for testing $H_{0\gamma}$. Define $T(x)=\inf_{\gamma\in\Gamma}\lambda_\gamma(x)$, and form the UIT with rejection region

$$\{x: T(x) < c\} = \bigcup_{\gamma\in\Gamma}\{x:\lambda_\gamma(x) < c\}\,.$$

Also, consider the usual LRT with rejection region $\{x:\lambda(x) < c\}$. Then

- $T(x)\ge \lambda(x)$ for every $x$
- If $\beta_T(\theta)$ and $\beta_\lambda(\theta)$ are the power functions for the tests based on $T$ and $\lambda$, respectively, then $\beta_T(\theta) \le \beta_\lambda(\theta)$ for every $\theta\in \Theta$.
- If the LRT is a level $\alpha$ test, then the UIT is a level $\alpha$ test.
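The first claim holds because each $\lambda_\gamma$ maximizes its numerator over $\Theta_\gamma\supset\Theta_0$:

$$\lambda_\gamma(x)=\frac{\sup_{\Theta_\gamma}L(\theta\mid x)}{\sup_{\Theta}L(\theta\mid x)}\ge\frac{\sup_{\Theta_0}L(\theta\mid x)}{\sup_{\Theta}L(\theta\mid x)}=\lambda(x)\quad\text{for every }\gamma\,,$$

and taking the infimum over $\gamma$ preserves the bound, so $T(x)=\inf_{\gamma\in\Gamma}\lambda_\gamma(x)\ge\lambda(x)$. The second and third claims follow because both tests reject for small values of their statistics.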

For the IUT, we have the following result.

Let $\alpha_\gamma$ be the size of the test of $H_{0\gamma}$ with rejection region $R_\gamma$. Then the IUT with rejection region $R=\bigcap_{\gamma\in\Gamma}R_\gamma$ is a level $\alpha = \sup_{\gamma\in \Gamma}\alpha_\gamma$ test.
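To see why, note that the IUT rejects $H_0$ only if every individual test rejects, so its rejection region is the intersection $R=\bigcap_{\gamma\in\Gamma}R_\gamma$. For any $\theta\in\Theta_0=\bigcup_{\gamma\in\Gamma}\Theta_\gamma$, we have $\theta\in\Theta_{\gamma'}$ for some $\gamma'$, and therefore

$$P_\theta(X\in R)\le P_\theta(X\in R_{\gamma'})\le\alpha_{\gamma'}\le\sup_{\gamma\in\Gamma}\alpha_\gamma=\alpha\,.$$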

This theorem provides an upper bound for the size of an IUT and is somewhat more useful than the corresponding result for UITs, which applies only to UITs constructed from likelihood ratio tests.

In fact, the size of the IUT may be much less than $\alpha$; the following theorem gives conditions under which its size is exactly $\alpha$, so that the IUT is not too conservative.

Consider testing $H_0:\theta\in \bigcup_{j=1}^k\Theta_j$, where $k$ is a finite positive integer. For each $j = 1,\ldots,k$, let $R_j$ be the rejection region of a level $\alpha$ test of $H_{0j}$. Suppose that for some $i=1,\ldots,k$, there exists a sequence of parameter points $\theta_l\in \Theta_i$, $l=1,2,\ldots$, such that

- $\lim_{l\rightarrow \infty}P_{\theta_l}(X\in R_i) = \alpha$
- for each $j = 1,\ldots, k$, $j\neq i$, $\lim_{l\rightarrow\infty} P_{\theta_l}(X\in R_j) = 1$.

Then the IUT with rejection region $R = \bigcap_{j=1}^k R_j$ is a size $\alpha$ test.

We only need to show $\sup_{\theta\in \Theta_0}P_\theta(X\in R)\ge \alpha$. Because all the parameter points $\theta_l$ satisfy $\theta_l\in \Theta_i\subset \Theta_0$,

$$\sup_{\theta\in\Theta_0}P_\theta(X\in R)\ge \lim_{l\rightarrow\infty}P_{\theta_l}\Big(X\in \bigcap_{j=1}^k R_j\Big)\ge \lim_{l\rightarrow\infty}\Big[\sum_{j=1}^k P_{\theta_l}(X\in R_j)-(k-1)\Big]=\alpha+(k-1)-(k-1)=\alpha\,,$$

where the second inequality applies Bonferroni's inequality, $P(\bigcup_{i=1}^nE_i)\le \sum_{i=1}^n P(E_i)$, to the complements $R_j^c$.

## Example: Acceptance sampling

Two parameters that are important in assessing the quality of upholstery fabric are

- $\theta_1$: the mean breaking strength
- $\theta_2$: the probability of passing a flammability test

Standards may dictate that $\theta_1$ should be over 50 pounds and $\theta_2$ should be over .95, and the fabric is acceptable only if it meets both of these standards. This can be modeled with the hypothesis test

$$H_0:\theta_1\le 50\text{ or }\theta_2\le .95\quad\text{versus}\quad H_1:\theta_1> 50\text{ and }\theta_2> .95\,,$$

where a batch of material is acceptable only if $H_1$ is accepted.

Suppose $X_1,\ldots, X_n$ are measurements of breaking strength for $n$ samples and are assumed to be iid $N(\theta_1,\sigma^2)$. The LRT of $H_{01}:\theta_1\le 50$ will reject $H_{01}$ if $(\bar X-50)/(S/\sqrt n) > t$. Suppose that we also have the results of $m$ flammability tests, denoted by $Y_1,\ldots,Y_m$, where $Y_i=1$ if the $i$-th sample passes the test and $Y_i=0$ otherwise. If $Y_1,\ldots,Y_m$ are modeled as iid Bernoulli($\theta_2$) random variables, the LRT will reject $H_{02}:\theta_2\le .95$ if $\sum_{i=1}^mY_i > b$. Putting all of this together, the rejection region for the intersection-union test is given by

$$R = \left\{(\mathbf x,\mathbf y): \frac{\bar x - 50}{s/\sqrt n} > t \text{ and } \sum_{i=1}^m y_i > b\right\}\,.$$

Let $n = m = 58$, $t=1.672$, and $b=57$; then each of the individual tests has size $\alpha = .05$ (approximately), so the IUT is a level $\alpha=0.05$ test. In fact, this test is a size $\alpha = 0.05$ test. Consider a sequence of parameter points $\theta_l = (\theta_{1l}, \theta_2)$ with $\theta_{1l}\rightarrow \infty$ as $l\rightarrow \infty$ and $\theta_2 = .95$. Then $P_{\theta_l}(X\in R_1)\rightarrow 1$ as $\theta_{1l}\rightarrow \infty$, while $P_{\theta_l}(X\in R_2)=0.05$ for all $l$ because $\theta_2=0.95$. Thus, by the theorem above, the IUT is a size $\alpha$ test.
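A quick Monte Carlo sketch (my own illustration; the constants $n=m=58$, $t=1.672$, $b=57$ are from the example) estimates the rejection probability at a boundary point with large $\theta_1$ and $\theta_2=.95$:

```python
import numpy as np

# Monte Carlo sketch of the IUT's rejection probability at a boundary
# point: theta_1 large (so the t test rejects almost surely) and
# theta_2 = .95 (the binomial test alone then has size about .05).
rng = np.random.default_rng(1)
n = m = 58
t_crit, b = 1.672, 57

reps = 5000
rejections = 0
for _ in range(reps):
    x = rng.normal(loc=60.0, scale=1.0, size=n)  # theta_1 = 60 >> 50
    y = rng.binomial(1, 0.95, size=m)            # theta_2 = .95
    t_stat = (x.mean() - 50) / (x.std(ddof=1) / np.sqrt(n))
    if t_stat > t_crit and y.sum() > b:          # both tests reject
        rejections += 1

print(rejections / reps)  # close to .95^58, about 0.05
```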
