7.1: Zero-way ANOVA (= Student $t$-distribution)

In the previous chapter, we introduced statistical hypothesis testing based on the Student $t$-distribution, which can be characterized as "zero-way" ANOVA (analysis of variance). In this section, we review this zero-way ANOVA.

Consider the classical basic structure

\begin{align} [ C_0(\Omega ) \subseteq L^\infty (\Omega, \nu ) \subseteq B(L^2 (\Omega, \nu ))] \end{align} where \begin{align} \Omega = {\mathbb R} \times {\mathbb R}_+ = \{ (\mu, \sigma ) \;|\; \mu \mbox{ is real, } \sigma \mbox{ is positive real} \} \end{align}

Consider the simultaneous normal measurement ${\mathsf M}_{L^\infty ({\mathbb R} \times {\mathbb R}_+)}$ $({\mathsf O}_G^n = ({\mathbb R}^n, {\mathcal B}_{\mathbb R}^n, {{{G}}^n}),$ $S_{[(\mu, \sigma)]})$ (in $L^\infty({\mathbb R} \times {\mathbb R}_+)$). For completeness, recall that

\begin{align} & [{{{G}}}^n ({\mathop{\mbox{\Large $\times$}}}_{k=1}^n \Xi_k)] (\omega) = {\mathop{\mbox{\Large $\times$}}}_{k=1}^n [{{{G}}}(\Xi_k)](\omega) \nonumber \\ = & \frac{1}{({{\sqrt{2 \pi }\sigma{}}})^n} \underset{{{\mathop{\mbox{\Large $\times$}}}_{k=1}^n \Xi_k }}{\int \cdots \int} \exp[{}- \frac{\sum_{k=1}^n ({}{x_k} - {}{\mu} )^2 } {2 \sigma^2} {}] d {}{x_1} d {}{x_2}\cdots dx_n \tag{7.1} \\ & \qquad (\forall \Xi_k \in {\cal B}_{{\mathbb R}{}}^{} (k=1,2,\ldots, n), \quad \forall {}{\omega}=(\mu, \sigma ) \in \Omega = {\mathbb R}\times {\mathbb R}_+). \nonumber \end{align}
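As a numerical illustration (not part of the formal argument), the product form in (7.1) says that the measure of a rectangle $\Xi_1 \times \cdots \times \Xi_n$ factors into one-dimensional Gaussian integrals. The state $(\mu, \sigma)$ and the intervals $\Xi_k$ below are arbitrary choices of ours:

```python
# Sketch: [G^n(Xi_1 x ... x Xi_n)](omega) factors into a product of
# one-dimensional Gaussian integrals, as in (7.1).
from scipy.stats import norm

mu, sigma = 1.0, 2.0                                 # a state omega = (mu, sigma)
intervals = [(-1.0, 0.5), (0.0, 3.0), (-2.0, 2.0)]   # Xi_1, Xi_2, Xi_3

# [G(Xi_k)](omega) = integral of the N(mu, sigma^2) density over Xi_k
prob = 1.0
for a, b in intervals:
    prob *= norm.cdf(b, loc=mu, scale=sigma) - norm.cdf(a, loc=mu, scale=sigma)

print(prob)   # a probability in (0, 1)
```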

Recall the state space $\Omega = {\mathbb R} \times {\mathbb R}_+$, the measured value space $X={\mathbb R}^n$, and the second state space (= parameter space) $\Theta={\mathbb R}$. Also, recall the estimator $E:X(={\mathbb R}^n) \to \Theta(={\mathbb R})$ defined by

\begin{align} E(x)=E(x_1, x_2, \ldots , x_n ) = \overline{\mu}(x) = \frac{x_1 + x_2 + \cdots + x_n}{n} \tag{7.2} \end{align}

and the system quantity $\pi:\Omega(={\mathbb R} \times {\mathbb R}_+) \to \Theta(={\mathbb R})$ defined by

\begin{align} \Omega(={\mathbb R} \times {\mathbb R}_+) \ni \omega = (\mu, \sigma ) \mapsto \pi (\mu, \sigma ) = \mu \in \Theta(={\mathbb R}) \tag{7.3} \end{align}

The essence of "studentization" is to define the semi-metric $d_\Theta^x$ $(\forall x \in X)$ in the second state space $\Theta (={\mathbb R})$ such that

\begin{align} d_\Theta^x (\theta^{(1)}, \theta^{(2)}) = \frac{|\theta^{(1)}-\theta^{(2)}|}{\sqrt{n}{\overline{\sigma}(x)}} = \frac{|\theta^{(1)}-\theta^{(2)}|}{\sqrt{\overline{SS}(x)}} \quad \qquad (\forall x \in X={\mathbb R}^n, \forall \theta^{(1)}, \theta^{(2)} \in \Theta={\mathbb R} ) \tag{7.4} \end{align}

where

\begin{align} {{\overline{SS}}} (x) = {{\overline{SS}}} (x_1,x_2,\ldots , x_n ) = {\sum_{k=1}^n ( x_k - \overline{\mu} (x))^2}, \qquad \overline{\sigma}(x) = \sqrt{ \overline{SS}(x)/n } \quad( \forall x=(x_1,x_2,\ldots , x_n ) \in {\mathbb R}^n) \nonumber \end{align}
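The quantities in (7.2) and (7.4) are elementary to compute. The following sketch (data and function names are our own, not from the text) evaluates $\overline{\mu}(x)$, $\overline{SS}(x)$, and the semi-metric $d^x_\Theta$:

```python
# Sketch of (7.2) and (7.4): sample mean, sum of squares, and the
# studentized semi-metric on the second state space Theta = R.
import math

def mu_bar(x):
    # estimator E(x) = (x_1 + ... + x_n) / n, cf. (7.2)
    return sum(x) / len(x)

def ss_bar(x):
    # SS-bar(x) = sum_k (x_k - mu_bar(x))^2
    m = mu_bar(x)
    return sum((xk - m) ** 2 for xk in x)

def d_theta(x, theta1, theta2):
    # d_Theta^x(theta1, theta2) = |theta1 - theta2| / sqrt(SS-bar(x)), cf. (7.4)
    return abs(theta1 - theta2) / math.sqrt(ss_bar(x))

x = [1.2, 0.8, 1.5, 0.9, 1.1]
print(mu_bar(x), ss_bar(x), d_theta(x, 0.0, 1.0))
```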

Thus, as mentioned in the previous chapter, our problem is characterized as follows.

Problem 7.1 [The zero-way ANOVA]. Consider the simultaneous normal measurement ${\mathsf M}_{L^\infty ({\mathbb R} \times {\mathbb R}_+)}$ $({\mathsf O}_G^n = ({\mathbb R}^n, {\mathcal B}_{\mathbb R}^n, {{{G}}^n}),$ $S_{[(\mu, \sigma)]})$. Here, assume that \begin{align} \mu = \mu_0 \end{align}

That is, the null hypothesis $H_N$ is defined by $H_N=\{ \mu_0 \}$ $(\subseteq \Theta= {\mathbb R} )$. Consider $0 < \alpha \ll 1$.

Then, find the largest ${\widehat R}_{{H_N}}^{\alpha; \Theta}( \subseteq \Theta)$ (independent of $\sigma$) such that

 $(A_1):$ the probability that a measured value $x(\in{\mathbb R}^n )$ (obtained by ${\mathsf M}_{L^\infty ({\mathbb R} \times {\mathbb R}_+ )} ({\mathsf O}_G^{{{n}}} = (X(\equiv {\mathbb R}^{{{n}}}), {\mathcal B}_{\mathbb R}^{{{n}}}, {{{G}}^{{{n}}}} ), S_{[(\mu_0, \sigma )]} )$) satisfies \begin{align} E(x) \in {\widehat R}_{{H_N}}^{\alpha; \Theta} \tag{7.5} \end{align} is less than $\alpha$.

Answer. We see that, for any $\omega=(\mu_0, \sigma )$ $(\in \Omega= {\mathbb R} \times {\mathbb R}_+ )$,

\begin{align} & [G^n(\{ x \in X \;:\; d^x_\Theta ( E(x) , \pi( \omega ) ) \ge \eta \} )](\omega ) \nonumber \\ =& [G^n(\{ x \in X \;:\; \frac{ |\overline{\mu}(x)- \mu_0 |}{ {{{\sqrt{\overline{SS}(x)}}}} } \ge \eta \} )](\omega ) \nonumber \\ = & \frac{1}{({{\sqrt{2 \pi }\sigma{}}})^n} \underset{ \eta \sqrt{n-1} \le \frac{ |\overline{\mu}(x)- \mu_0 |}{ {\sqrt{\overline{SS}(x)}} /\sqrt{n-1} } }{\int \cdots \int} \exp[{}- \frac{\sum_{k=1}^n ({}{x_k} - {}{\mu_0} )^2 } {2 \sigma^2} {}] d {}{x_1} d {}{x_2}\cdots dx_n \nonumber \\ = & \frac{1}{({{\sqrt{2 \pi }{}}})^n} \underset{ \eta^2 n({n-1}) \le \frac{ n(\overline{\mu}(x))^2 }{ {\overline{SS}(x)}/({n-1}) } } {\int \cdots \int} \exp[{}- \frac{\sum_{k=1}^n ({}{x_k} {} )^2 } {2 } {}] d {}{x_1} d {}{x_2}\cdots dx_n \tag{7.6} \end{align}

$(A_2):$ by the formula for Gaussian integrals (Formula 7.8 (A) in $\S$7.4), we see that

\begin{align} = & \int^{\infty}_{ \eta^2 n({n-1}) } p_{(1,{{n}}-1) }^F(t) dt = \alpha \;\; (\mbox{ e.g., $\alpha=0.05$}) \tag{7.7} \end{align}

where $p_{(1,{{n}}-1) }^F$ is the probability density function of the $F$-distribution with $(1,n-1)$ degrees of freedom.
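The equality (7.7) can be checked by a Monte Carlo sketch: under the state $(\mu_0, \sigma)$, the statistic $n(\overline{\mu}(x)-\mu_0)^2 / (\overline{SS}(x)/(n-1))$ appearing in (7.6) should exceed the upper $\alpha$ threshold of the $F$-distribution with $(1, n-1)$ degrees of freedom with probability about $\alpha$. The sample size, seed, and parameters below are arbitrary choices of ours:

```python
# Monte Carlo check of (7.6)-(7.7): the studentized statistic follows the
# F-distribution with (1, n-1) degrees of freedom under the null state.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
n, mu0, sigma, alpha = 10, 2.0, 3.0, 0.05
samples = rng.normal(mu0, sigma, size=(100_000, n))   # rows: measured values x

mu_bar = samples.mean(axis=1)
ss = ((samples - mu_bar[:, None]) ** 2).sum(axis=1)
stat = n * (mu_bar - mu0) ** 2 / (ss / (n - 1))       # the statistic in (7.6)

threshold = f.isf(alpha, 1, n - 1)                    # = eta^2 * n * (n-1), cf. (7.10)
print((stat >= threshold).mean())                     # should be close to alpha
```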

Note that the probability density function $p_{(n_1,n_2)}^F(t)$ of the $F$-distribution with $(n_1,n_2)$ degrees of freedom is defined by

\begin{align} p_{(n_1,n_2)}^F(t) = \frac{1}{B(n_1/2, n_2/2)} \Big(\frac{n_1}{n_2} \Big)^{n_1/2} \frac{t^{(n_1-2)/2}}{(1+n_1t/n_2)^{(n_1+n_2)/2}} \qquad (t \ge 0) \tag{7.8} \end{align}

where $B(\cdot, \cdot)$ is the Beta function.

The $\alpha$-point $F_{n_2, \alpha}^{n_1}$ $( > 0)$ is defined by

\begin{align} \int^{\infty}_{F_{n_2, \alpha}^{n_1} } p_{(n_1,n_2) }^F (t) dt =\alpha \qquad (0 < \alpha \ll 1, \mbox{ e.g., } \alpha=0.05) \tag{7.9} \end{align}
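Numerically, the $\alpha$-point in (7.9) is the upper $\alpha$ quantile of the $F$-distribution, available in `scipy.stats` as `f.isf`. A quick sanity check of the definition (degrees of freedom and $\alpha$ below are arbitrary choices):

```python
# The alpha-point of the F-distribution with (n1, n2) degrees of freedom:
# the value whose upper tail probability equals alpha, cf. (7.9).
from scipy.stats import f

n1, n2, alpha = 1, 9, 0.05
f_point = f.isf(alpha, n1, n2)          # the alpha-point
print(f_point, f.sf(f_point, n1, n2))   # tail probability recovers alpha
```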

Thus, it suffices to solve the following equation:

\begin{align} {\eta^2 n({{n}}-1) }{ } ={F_{n-1, \alpha}^{1} } \tag{7.10} \end{align}

Therefore,

\begin{align} (\eta^\alpha_{\omega})^2 = \frac{{F_{n-1, \alpha}^{1} }}{n(n-1)} \tag{7.11} \end{align}

Then, the rejection region ${\widehat R}_{{H_N}}^{\alpha; \Theta}$ (or ${\widehat R}_{H_N}^{\alpha; X}$) is calculated as

\begin{align} {\widehat R}_{{H_N}}^{\alpha; \Theta} & = \bigcap_{\omega =(\mu, \sigma ) \in \Omega (={\mathbb R} \times {\mathbb R}_+) \mbox{ such that } \pi(\omega)= \mu \in {H_N}(=\{\mu_0\})} \{ E(x) (\in \Theta) : \;\; d^x_\Theta (E(x), \pi(\omega ) ) \ge \eta^\alpha_{\omega } \} \nonumber \\ & = \{\overline{\mu}(x) \in \Theta(={\mathbb R}) \;:\; \frac{ |\overline{\mu}(x)- \mu_0 |}{ \sqrt{{\overline{SS}(x)}} } \ge \eta_\omega^\alpha \} = \{\overline{\mu}(x) \in \Theta(={\mathbb R}) \;:\; \frac{ |\overline{\mu}(x)- \mu_0 |}{ \overline{\sigma}(x) } \ge \eta_\omega^\alpha \sqrt{n} \} \nonumber \\ & = \Big\{\overline{\mu}(x) \in \Theta(={\mathbb R}) \;:\; \frac{ |\overline{\mu}(x)- \mu_0 |}{ \overline{\sigma}(x) } \ge \sqrt{\frac{F_{n-1, \alpha}^1}{n-1}} \;\; \Big\} \nonumber \\ & = \Big\{\overline{\mu}(x) \in \Theta(={\mathbb R}) \;:\; \mu_0 \le \overline{\mu}(x) - {{\overline{\sigma}(x)}} \sqrt{\frac{F_{n-1, \alpha}^1}{n-1}} \mbox{ or } \overline{\mu}(x) + {{\overline{\sigma}(x)}} \sqrt{\frac{F_{n-1, \alpha}^1}{n-1}} \le \mu_0 \Big\} \tag{7.12} \end{align}

and,

\begin{align} {\widehat R}_{H_N}^{\alpha; X} &= E^{-1}({\widehat R}_{{H_N}}^{\alpha; \Theta}) \nonumber \\ & = \Big\{x \in X(={\mathbb R}^n) \;:\; \mu_0 \le \overline{\mu}(x) - {{\overline{\sigma}(x)}} \sqrt{\frac{F_{n-1, \alpha}^1}{n-1}} \mbox{ or } \overline{\mu}(x) + {{\overline{\sigma}(x)}} \sqrt{\frac{F_{n-1, \alpha}^1}{n-1}} \le \mu_0 \Big\} \tag{7.13} \end{align}
QED
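As a practical sketch, the rejection region (7.13) can be turned into a decision procedure. Since this test coincides with the classical two-sided one-sample $t$-test, we also compare against `scipy.stats.ttest_1samp`. The data and function names below are our own, not from the text:

```python
# Sketch of the rejection region (7.13) as a decision procedure, compared
# against the classical two-sided one-sample t-test.
import math
from scipy.stats import f, ttest_1samp

def reject_H_N(x, mu0, alpha=0.05):
    # True iff the measured value x lies in the rejection region (7.13)
    n = len(x)
    mu_bar = sum(x) / n
    ss = sum((xk - mu_bar) ** 2 for xk in x)
    sigma_bar = math.sqrt(ss / n)                 # sigma-bar(x) = sqrt(SS-bar(x)/n)
    width = sigma_bar * math.sqrt(f.isf(alpha, 1, n - 1) / (n - 1))
    return mu0 <= mu_bar - width or mu_bar + width <= mu0

x = [2.1, 1.8, 2.5, 2.2, 1.9, 2.4, 2.0, 2.3]
for mu0 in (2.15, 1.0):
    # both columns agree: reject iff the t-test p-value is below alpha
    print(reject_H_N(x, mu0), ttest_1samp(x, mu0).pvalue < 0.05)
```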
$\fbox{Note 7.1}$ (i): It should be noted that the only purely mathematical part of the above argument is (A$_2$).
(ii): Also, note that
 $(\sharp):$ $\;\;\;$ if $T$ obeys the Student $t$-distribution with $(n-1)$ degrees of freedom, then $T^2$ obeys the $F$-distribution with $(1,n-1)$ degrees of freedom. In this sense, the $F$-distribution with $(1,n-1)$ degrees of freedom corresponds to the Student $t$-distribution with $(n-1)$ degrees of freedom.
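The correspondence $(\sharp)$ can be confirmed numerically: the upper $\alpha$-point of the $F$-distribution with $(1, n-1)$ degrees of freedom equals the square of the upper $\alpha/2$-point of the $t$-distribution with $n-1$ degrees of freedom ($n$ and $\alpha$ below are arbitrary choices):

```python
# Check of (sharp): if T ~ t_{n-1}, then T^2 ~ F_{(1, n-1)}, hence the
# F alpha-point equals the squared two-sided t alpha-point.
from scipy.stats import f, t

n, alpha = 12, 0.05
f_point = f.isf(alpha, 1, n - 1)     # upper alpha-point of F(1, n-1)
t_point = t.isf(alpha / 2, n - 1)    # upper alpha/2-point of t(n-1)
print(f_point, t_point ** 2)         # the two values agree
```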

Thus, we conclude that