6.2: The reverse relation between confidence interval method and statistical hypothesis testing
Consider an observable ${\mathsf O} = (X, {\cal F} , F){}$ in ${L^\infty (\Omega)}$. Let $\Theta$ be a locally compact space (called the second state space) equipped with a semi-metric $d^x_{\Theta}$ $(\forall x \in X)$ such that,
$(\sharp):$ | for each $x\in X$, the map $d^x_{\Theta}: \Theta^2 \to [0,\infty)$ satisfies (i): $d^x_\Theta (\theta, \theta )=0$, (ii): $d^x_\Theta (\theta_1, \theta_2 )$ $=d^x_\Theta (\theta_2, \theta_1 )$, (iii): $d^x_\Theta (\theta_1, \theta_3 )$ $\le d^x_\Theta (\theta_1, \theta_2 ) + d^x_\Theta (\theta_2, \theta_3 ) $. |
Furthermore, consider two maps $E:X \to \Theta$ and $\pi: \Omega \to \Theta$, which are called an estimator and a system quantity, respectively.

$(A):$ | the probability that the measured value $x$ obtained by the measurement ${\mathsf M}_{L^\infty (\Omega)} \big({\mathsf O}:= (X, {\cal F} , F) ,$ $ S_{[\omega_0 {}] } \big)$ satisfies the following condition (6.10) is greater than or equal to ${1 -\alpha }$ (e.g., ${1 -\alpha }= 0.95$). |

Remark 6.4 [(B$_1$): The meaning of the confidence interval]. Consider the parallel measurement $\bigotimes_{j=1}^J {\mathsf M}_{L^\infty (\Omega)} \big({\mathsf O}:= (X, {\cal F} , F) ,$ $ S_{[\omega_0 {}] } \big)$, and assume that a measured value $x=(x_1,x_2, \ldots , x_J)\ ( \in X^J)$ is obtained by this parallel measurement. Recall the formula (6.12). Then, it holds that
\begin{align} \lim_{J \to \infty } \frac{\mbox{Num} [\{ j \;|\; D_{x_j}^{1 -\alpha, \Theta } \ni \pi( \omega_0) \}]}{J} \ge {1 -\alpha }\ (= 0.95) \tag{6.13} \end{align}where $\mbox{Num} [A]$ is the number of elements of the set $A$. Hence Theorem 6.3 can be tested by numerical analysis (with random numbers). Similarly, Theorem 6.5 (mentioned later) can be tested.
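The convergence (6.13) can be checked by a small simulation. The following is a minimal sketch, assuming the simplest Gaussian example (not specified in this chunk): $X = \mathbb{R}$, measured values distributed as $N(\omega_0, 1)$, estimator $E(x)=x$, system quantity $\pi(\omega)=\omega$, semi-metric $d^x_\Theta(\theta_1,\theta_2)=|\theta_1-\theta_2|$, and hence $\delta^{1-\alpha}_\omega = 1.96$ for $\alpha = 0.05$.

```python
# Monte Carlo check of (6.13) under an assumed Gaussian observable:
# x_j ~ N(omega0, 1), E(x) = x, pi(omega) = omega, d = |.|,
# so D_{x_j}^{0.95, Theta} contains pi(omega0) iff |x_j - omega0| < 1.96.
import random

random.seed(0)
omega0 = 3.0        # true (unknown) state, fixed for the simulation
alpha = 0.05
delta = 1.96        # delta_omega^{1-alpha} for the standard normal
J = 100_000         # number of parallel measurements

hits = sum(1 for _ in range(J)
           if abs(random.gauss(omega0, 1.0) - omega0) < delta)
coverage = hits / J
print(coverage)     # close to 1 - alpha = 0.95, in agreement with (6.13)
```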
[(B$_2$)] Also, note that \begin{align} (6.9) & = \delta_\omega^{1-\alpha} = \inf \{ \delta > 0: [F(\{ x \in X \;:\; d^x_\Theta ( E(x) , \pi( \omega ) ) < \delta \} )](\omega ) \ge {1-\alpha} \} \nonumber \\ &= \inf \{ \eta > 0: [F(\{ x \in X \;:\; d^x_\Theta ( E(x) , \pi( \omega ) ) \ge \eta \} )](\omega ) \le \alpha \} \tag{6.14} \end{align}
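The equality of the two infima in (6.14) can be illustrated numerically. In the assumed Gaussian example above, $[F(\{x : d^x_\Theta(E(x),\pi(\omega)) < t\})](\omega) = \mathrm{erf}(t/\sqrt{2})$, and the two infima are taken over the same grid:

```python
# Numerical illustration of (6.14) for the assumed Gaussian example:
# inf{delta : P(|x - omega| < delta) >= 1 - alpha}
#   = inf{eta : P(|x - omega| >= eta) <= alpha}.
import math

alpha = 0.05

def p_inside(t):
    # P(|x - omega| < t) under N(omega, 1) equals erf(t / sqrt(2))
    return math.erf(t / math.sqrt(2.0))

grid = [i * 1e-4 for i in range(1, 50_000)]
delta = min(t for t in grid if p_inside(t) >= 1 - alpha)
eta = min(t for t in grid if 1.0 - p_inside(t) <= alpha)
print(delta, eta)   # both approximately 1.96, i.e. delta^{1-alpha} = eta^{alpha}
```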
6.2.2 Statistical hypothesis testing
Next, we explain statistical hypothesis testing, which is characterized as the reverse of the confidence interval method.

$(C):$ | the probability that the measured value $x$ obtained by the measurement ${\mathsf M}_{L^\infty (\Omega)} \big({\mathsf O}:= (X, {\cal F} , F) ,$ $ S_{[\omega_0 {}] } \big)$ satisfies the following condition (6.16) is less than or equal to $\alpha$ (e.g., $\alpha= 0.05$). \begin{align} d^x_\Theta (E(x), \pi(\omega_0)) \ge {\eta }^\alpha_{\omega_0} . \tag{6.16} \end{align} |
$(D):$ | the probability that the measured value $x$ obtained by the measurement ${\mathsf M}_{L^\infty (\Omega)} \big({\mathsf O}:= (X, {\cal F} , F) ,$ $ S_{[\omega_0 {}] } \big)$ $($where $\pi(\omega_0) \in H_N )$ satisfies the following condition (6.18) is less than or equal to $\alpha$ (e.g., $\alpha= 0.05$). |

Let $0 < \alpha \ll 1$. Consider an observable ${\mathsf O} = (X, {\cal F} , F){}$ in ${L^\infty (\Omega)}$ and the second state space $\Theta$ (i.e., a locally compact space with a semi-metric $d_\Theta^x\ (x \in X)$). Further, consider the estimator $E:X \to \Theta$ and the system quantity $\pi: \Omega \to \Theta$. Define $\delta_\omega^{1-\alpha}$ by (6.9), and define $\eta_\omega^{\alpha}$ by (6.15) (and thus $\delta_\omega^{1-\alpha}= \eta_\omega^{\alpha}$).
$(E):$ | [Confidence interval method]. For each $x \in X$, define the $(1- \alpha)$-confidence interval by
\begin{align}
D_{x}^{1- \alpha, \Theta } = \{ \pi(\omega) \; (\in \Theta) : d^x_\Theta (E(x), \pi(\omega )) < \delta^{1- \alpha}_{\omega } \} \tag{6.19}
\end{align}
\begin{align}
D_{x}^{1- \alpha, \Omega} = \{ \omega \; (\in \Omega) : d^x_\Theta (E(x), \pi(\omega )) < \delta^{1- \alpha}_{\omega } \} \tag{6.20}
\end{align}
Here, assume that a measured value $x (\in X)$ is obtained by the measurement ${\mathsf M}_{L^\infty (\Omega)} \big({\mathsf O}:= (X, {\cal F} , F) ,$ $ S_{[\omega_0 {}] } \big)$. Then, we see that
|
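In the assumed Gaussian example (estimator $E(x)=x$, $\pi(\omega)=\omega$, $d=|\cdot|$, $\delta^{1-\alpha}_\omega = 1.96$ for $\alpha = 0.05$), the confidence interval (6.19) reduces to an open interval around the measured value, which a short sketch can make concrete:

```python
# A minimal sketch of (6.19) for the assumed Gaussian example:
# D_x^{1-alpha, Theta} = { theta : |x - theta| < 1.96 } = (x - 1.96, x + 1.96).
def confidence_interval(x, delta=1.96):
    """Endpoints of the open interval D_x^{1-alpha, Theta}."""
    return (x - delta, x + delta)

lo, hi = confidence_interval(2.3)
print(lo, hi)   # (0.34, 4.26): every state omega with pi(omega) in this
                # interval is compatible with x = 2.3 at the level 0.95
```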
$(F):$ | [statistical hypothesis testing]. Consider the null hypothesis $H_N ( \subseteq \Theta )$. Assume that the state $\omega_0(\in \Omega )$ satisfies:
\begin{align}
\pi(\omega_0) \in H_N \; ( \subseteq \Theta )
\end{align}
Here, put
\begin{align}
{\widehat R}_{{H_N}}^{\alpha; \Theta} = \bigcap_{\omega \in \Omega \mbox{ such that } \pi(\omega) \in {H_N}} \{ E({x}) \; (\in \Theta) : d^x_\Theta (E(x), \pi(\omega )) \ge \eta^\alpha_{\omega } \} \tag{6.21}
\end{align}
\begin{align}
{\widehat R}_{{H_N}}^{\alpha; X} = E^{-1}({\widehat R}_{{H_N}}^{\alpha; \Theta}) = \bigcap_{\omega \in \Omega \mbox{ such that } \pi(\omega) \in {H_N}} \{ x \; (\in X) : d^x_\Theta (E(x), \pi(\omega )) \ge \eta^\alpha_{\omega } \} \tag{6.22}
\end{align}
Each of (6.21) and (6.22) is called the $(\alpha)$-rejection region of the null hypothesis ${H_N}$.
Assume that a measured value $x (\in X)$ is obtained by the measurement ${\mathsf M}_{L^\infty (\Omega)} \big({\mathsf O}:= (X, {\cal F} , F) ,$ $ S_{[\omega_0 {}] } \big)$. Then, we see that
|
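Since (6.22) is an intersection over all $\omega$ with $\pi(\omega) \in H_N$, a measured value lies in the rejection region only if it is far (by at least $\eta^\alpha_\omega$) from every such state. A minimal sketch, again under the assumed Gaussian example ($E(x)=x$, $\pi(\omega)=\omega$, $d=|\cdot|$, $\eta^\alpha_\omega = 1.96$) with a composite null hypothesis $H_N = [a, b]$:

```python
# Sketch of the rejection region (6.22) for the assumed Gaussian example
# with null hypothesis H_N = [a, b]: x is rejected iff
# min_{theta in [a, b]} |x - theta| >= eta^alpha (= 1.96 for alpha = 0.05).
def rejects(x, a, b, eta=1.96):
    """True iff x lies in R_{H_N}^{alpha; X}."""
    dist = max(a - x, x - b, 0.0)   # distance from x to the interval [a, b]
    return dist >= eta

print(rejects(5.0, 0.0, 1.0))   # True:  5.0 is far from every theta in [0, 1]
print(rejects(2.5, 0.0, 1.0))   # False: |2.5 - 1.0| = 1.5 < 1.96
```

Note the duality with (6.19): $x$ is rejected exactly when $D_x^{1-\alpha,\Theta}$ misses $H_N$ entirely, which is the sense in which hypothesis testing is the reverse of the confidence interval method.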