This page uses content from Wikipedia and is licensed under CC BY-SA.

# Working–Hotelling procedure

In statistics, particularly regression analysis, the Working–Hotelling procedure, named after Holbrook Working and Harold Hotelling, is a method of simultaneous estimation in linear regression models. One of the first developments in simultaneous inference, it was devised by Working and Hotelling for the simple linear regression model in 1929.[1] It provides a confidence region for multiple mean responses, that is, it gives the upper and lower bounds of more than one value of a dependent variable at several levels of the independent variables at a certain confidence level. The resulting confidence bands are known as the Working–Hotelling–Scheffé confidence bands.

Like the closely related Scheffé's method in the analysis of variance, which considers all possible contrasts, the Working–Hotelling procedure considers all possible values of the independent variables; that is, in a particular regression model, the probability that all the Working–Hotelling confidence intervals cover the true value of the mean response is the confidence coefficient. As such, when only a small subset of the possible values of the independent variable is considered, it is more conservative and yields wider intervals than competitors like the Bonferroni correction at the same level of confidence. It outperforms the Bonferroni correction as more values are considered.

## Statement

### Simple linear regression

Consider a simple linear regression model ${\displaystyle Y=\beta _{0}+\beta _{1}X+\varepsilon }$, where ${\displaystyle Y}$ is the response variable and ${\displaystyle X}$ the explanatory variable, and let ${\displaystyle b_{0}}$ and ${\displaystyle b_{1}}$ be the least-squares estimates of ${\displaystyle \beta _{0}}$ and ${\displaystyle \beta _{1}}$ respectively. Then the least-squares estimate of the mean response ${\displaystyle E(Y_{i})}$ at the level ${\displaystyle X=x_{i}}$ is ${\displaystyle {\hat {Y_{i}}}=b_{0}+b_{1}x_{i}}$. Assuming that the errors are independent and identically normally distributed, it can be shown that a ${\displaystyle 1-\alpha }$ confidence interval for the mean response at a given level of ${\displaystyle X}$ is as follows:

${\displaystyle {\hat {y}}_{i}\in \left[b_{0}+b_{1}x_{i}\pm t_{\alpha /2,{\text{df}}=n-2}{\sqrt {\left({\frac {1}{n-2}}\sum _{j=1}^{n}e_{j}^{\,2}\right)\cdot \left({\frac {1}{n}}+{\frac {(x_{i}-{\bar {x}})^{2}}{\sum _{j=1}^{n}(x_{j}-{\bar {x}})^{2}}}\right)}}\right],}$

where ${\displaystyle \left({\frac {1}{n-2}}\sum _{j=1}^{n}e_{j}^{\,2}\right)}$ is the mean squared error and ${\displaystyle t_{\alpha /2,{\text{df}}=n-2}}$ denotes the upper ${\displaystyle {\frac {\alpha }{2}}^{\text{th}}}$ percentile of Student's t-distribution with ${\displaystyle n-2}$ degrees of freedom.
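The pointwise interval above can be computed directly. The following is a minimal sketch in Python using numpy and scipy; the function name `pointwise_ci` and the data in the test are illustrative, not from the article:

```python
import numpy as np
from scipy import stats

def pointwise_ci(x, y, x0, alpha=0.05):
    """Pointwise 1 - alpha confidence interval for E(Y) at X = x0."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)
    b1 = np.sum((x - xbar) * (y - ybar)) / sxx      # slope estimate
    b0 = ybar - b1 * xbar                           # intercept estimate
    mse = np.sum((y - b0 - b1 * x) ** 2) / (n - 2)  # (1/(n-2)) * sum e_j^2
    se = np.sqrt(mse * (1.0 / n + (x0 - xbar) ** 2 / sxx))
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)        # upper alpha/2 percentile
    fit = b0 + b1 * x0
    return fit - t * se, fit + t * se
```

Note that this interval covers the mean response at a single, pre-chosen ${\displaystyle x_{0}}$ only; the next paragraph explains why it cannot simply be reused at many levels at once.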

However, as multiple mean responses are estimated, the confidence level declines rapidly. To fix the confidence coefficient at ${\displaystyle 1-\alpha }$, the Working–Hotelling approach employs an F-statistic:[2][3]

${\displaystyle {\hat {y}}_{i}\in \left[b_{0}+b_{1}x_{i}\pm W{\sqrt {\left({\frac {1}{n-2}}\sum _{j=1}^{n}e_{j}^{\,2}\right)\cdot \left({\frac {1}{n}}+{\frac {(x_{i}-{\bar {x}})^{2}}{\sum _{j=1}^{n}(x_{j}-{\bar {x}})^{2}}}\right)}}\right],}$

where ${\displaystyle W^{2}=2F_{\alpha ,{\text{df}}=(2,n-2)}}$ and ${\displaystyle F}$ denotes the upper ${\displaystyle \alpha ^{\text{th}}}$ percentile of the F-distribution with ${\displaystyle (2,n-2)}$ degrees of freedom. The confidence level is ${\displaystyle 1-\alpha }$ over all values of ${\displaystyle X}$, i.e. ${\displaystyle x_{i}\in \mathbb {R} }$.
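The only change from the pointwise interval is the critical value: ${\displaystyle W={\sqrt {2F_{\alpha ,{\text{df}}=(2,n-2)}}}}$ replaces the ${\displaystyle t}$ quantile. A sketch (the helper name `working_hotelling_band` is illustrative, and normal iid errors are assumed as above):

```python
import numpy as np
from scipy import stats

def working_hotelling_band(x, y, x0, alpha=0.05):
    """Simultaneous 1 - alpha limits for E(Y) at X = x0, valid jointly
    over every x0 at which the band is evaluated."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)
    b1 = np.sum((x - xbar) * (y - ybar)) / sxx
    b0 = ybar - b1 * xbar
    mse = np.sum((y - b0 - b1 * x) ** 2) / (n - 2)
    # W^2 = 2 F_{alpha; df=(2, n-2)} -- the only change from the t interval
    W = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))
    se = np.sqrt(mse * (1.0 / n + (x0 - xbar) ** 2 / sxx))
    fit = b0 + b1 * x0
    return fit - W * se, fit + W * se
```

Because the coverage is simultaneous over all of ${\displaystyle \mathbb {R} }$, the function may be called at any number of levels without eroding the confidence coefficient.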

### Multiple linear regression

The Working–Hotelling confidence bands can be easily generalised to multiple linear regression. Consider a general linear model as defined in the linear regression article, that is,

${\displaystyle \mathbf {Y} =\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }},\,}$

where

${\displaystyle \mathbf {Y} ={\begin{pmatrix}Y_{1}\\Y_{2}\\\vdots \\Y_{n}\end{pmatrix}},\quad \mathbf {X} ={\begin{pmatrix}\mathbf {x} _{1}^{\rm {T}}\\\mathbf {x} _{2}^{\rm {T}}\\\vdots \\\mathbf {x} _{n}^{\rm {T}}\end{pmatrix}}={\begin{pmatrix}x_{11}&\cdots &x_{1p}\\x_{21}&\cdots &x_{2p}\\\vdots &\ddots &\vdots \\x_{n1}&\cdots &x_{np}\end{pmatrix}},{\boldsymbol {\beta }}={\begin{pmatrix}\beta _{1}\\\beta _{2}\\\vdots \\\beta _{p}\end{pmatrix}},\quad {\boldsymbol {\varepsilon }}={\begin{pmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\vdots \\\varepsilon _{n}\end{pmatrix}}.}$

Again, it can be shown that the least-squares estimate of the mean response ${\displaystyle E(Y_{i})=\mathbf {x} _{i}^{\rm {T}}{\boldsymbol {\beta }}}$ is ${\displaystyle {\hat {Y}}_{i}=\mathbf {x} _{i}^{\rm {T}}\mathbf {b} }$, where ${\displaystyle \mathbf {b} }$ consists of the least-squares estimates of the entries in ${\displaystyle {\boldsymbol {\beta }}}$, i.e. ${\displaystyle \mathbf {b} =(\mathbf {X} ^{\rm {T}}\mathbf {X} )^{-1}\mathbf {X} ^{\rm {T}}\mathbf {Y} }$. Likewise, it can be shown that a ${\displaystyle 1-\alpha }$ confidence interval for a single mean response estimate is as follows:[4]

${\displaystyle {\hat {y}}_{i}\in \left[\mathbf {x} _{i}^{\rm {T}}\mathbf {b} \pm t_{\alpha /2,{\text{df}}=n-p}{\sqrt {\operatorname {MSE} \cdot \mathbf {x} _{i}^{\rm {T}}(\mathbf {X} ^{\rm {T}}\mathbf {X} )^{-1}\mathbf {x} _{i}}}\right],}$

where ${\displaystyle \operatorname {MSE} }$ is the observed value of the mean squared error, ${\displaystyle (\mathbf {Y} ^{\rm {T}}\mathbf {Y} -\mathbf {b} ^{\rm {T}}\mathbf {X} ^{\rm {T}}\mathbf {Y} )/(n-p)}$, i.e. the residual sum of squares divided by its degrees of freedom.

The Working–Hotelling approach to multiple estimations is similar to that of simple linear regression, with only a change in the degrees of freedom:[3]

${\displaystyle {\hat {y}}_{i}\in \left[\mathbf {x} _{i}^{\rm {T}}\mathbf {b} \pm W{\sqrt {\operatorname {MSE} \cdot \mathbf {x} _{i}^{\rm {T}}(\mathbf {X} ^{\rm {T}}\mathbf {X} )^{-1}\mathbf {x} _{i}}}\right],}$

where ${\displaystyle W^{2}=pF_{\alpha ,{\text{df}}=(p,n-p)}}$, with ${\displaystyle p}$ the number of regression parameters.
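A hedged sketch of the matrix form follows. It assumes ${\displaystyle \mathbf {X} }$ has full column rank (so ${\displaystyle \mathbf {X} ^{\rm {T}}\mathbf {X} }$ is invertible) and uses the Scheffé-type constant ${\displaystyle W^{2}=pF_{\alpha ,(p,n-p)}}$ for ${\displaystyle p}$ regression coefficients; the function name `wh_band_multiple` is illustrative:

```python
import numpy as np
from scipy import stats

def wh_band_multiple(X, Y, x0, alpha=0.05):
    """Simultaneous 1 - alpha limits for E(Y) at the covariate vector x0."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    x0 = np.asarray(x0, dtype=float)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ Y                        # b = (X^T X)^{-1} X^T Y
    mse = np.sum((Y - X @ b) ** 2) / (n - p)     # SSE / (n - p)
    W = np.sqrt(p * stats.f.ppf(1 - alpha, p, n - p))  # W^2 = p F_{alpha;p,n-p}
    se = np.sqrt(mse * (x0 @ XtX_inv @ x0))      # sqrt(MSE x0^T (X^T X)^{-1} x0)
    fit = float(x0 @ b)
    return fit - W * se, fit + W * se
```

If an intercept is wanted, the corresponding column of ones must be included in `X` and in `x0` explicitly, matching the general-linear-model notation above.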

## Graphical representation

In the simple linear regression case, Working–Hotelling–Scheffé confidence bands, drawn by connecting the upper and lower limits of the mean response at every level, take the shape of hyperbolas. In drawing, they are sometimes approximated by the Graybill–Bowden confidence bands, which are linear and hence easier to graph:[2]

${\displaystyle \beta _{0}+\beta _{1}(x_{i}-{\bar {x}})\in \left[b_{0}+b_{1}(x_{i}-{\bar {x}})\pm m_{\alpha ,2,{\text{df}}=n-2}{\sqrt {\operatorname {MSE} }}\cdot \left({\frac {1}{\sqrt {n}}}+{\frac {|x_{i}-{\bar {x}}|}{\sqrt {\sum _{j=1}^{n}(x_{j}-{\bar {x}})^{2}}}}\right)\right]}$

where ${\displaystyle m_{\alpha ,2,{\text{df}}=n-2}}$ denotes the upper ${\displaystyle \alpha ^{\text{th}}}$ percentile of the Studentized maximum modulus distribution with two means and ${\displaystyle n-2}$ degrees of freedom.

*Figure: the simple linear regression model with a Working–Hotelling confidence band.*

## Numerical example

The same data as in the ordinary least squares article are used in this example:

| Height (m) | Weight (kg) |
|------------|-------------|
| 1.47       | 52.21       |
| 1.5        | 53.12       |
| 1.52       | 54.48       |
| 1.55       | 55.84       |
| 1.57       | 57.2        |
| 1.6        | 58.57       |
| 1.63       | 59.93       |
| 1.65       | 61.29       |
| 1.68       | 63.11       |
| 1.7        | 64.47       |
| 1.73       | 66.28       |
| 1.75       | 68.1        |
| 1.78       | 69.92       |
| 1.8        | 72.19       |
| 1.83       | 74.46       |

A simple linear regression model is fit to this data. The values of ${\displaystyle b_{0}}$ and ${\displaystyle b_{1}}$ have been found to be −39.06 and 61.27 respectively. The goal is to estimate the mean mass of women given their heights at the 95% confidence level. The value of ${\displaystyle W}$ was found to be ${\displaystyle W={\sqrt {2F_{0.05,{\text{df}}=(2,13)}}}=2.758828}$. It was also found that ${\displaystyle {\bar {x}}=1.651}$, ${\displaystyle \sum _{j=1}^{n}e_{j}^{\,2}=7.490558}$, ${\displaystyle \operatorname {MSE} =0.5761968}$ and ${\displaystyle \sum _{j=1}^{n}(x_{j}-{\bar {x}})^{2}=0.1827}$. Then, to estimate the mean mass of all women of a particular height, the following Working–Hotelling–Scheffé band has been derived:

${\displaystyle {\hat {y}}_{i}\in \left[-39.06+61.27x_{i}\pm 2.758828{\sqrt {0.5761968\cdot \left({\frac {1}{15}}+{\frac {(x_{i}-1.651)^{2}}{0.1827}}\right)}}\right],}$

which yields the band plotted in the figure above.
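The example can be checked numerically. The sketch below refits the model from the height/weight table above; scipy is assumed to be available for the F quantile, and the helper name `band` is illustrative:

```python
import numpy as np
from scipy import stats

height = np.array([1.47, 1.5, 1.52, 1.55, 1.57, 1.6, 1.63, 1.65,
                   1.68, 1.7, 1.73, 1.75, 1.78, 1.8, 1.83])
weight = np.array([52.21, 53.12, 54.48, 55.84, 57.2, 58.57, 59.93, 61.29,
                   63.11, 64.47, 66.28, 68.1, 69.92, 72.19, 74.46])

n = len(height)
xbar = height.mean()
sxx = np.sum((height - xbar) ** 2)
b1 = np.sum((height - xbar) * (weight - weight.mean())) / sxx
b0 = weight.mean() - b1 * xbar
mse = np.sum((weight - b0 - b1 * height) ** 2) / (n - 2)
W = np.sqrt(2 * stats.f.ppf(0.95, 2, n - 2))   # W^2 = 2 F_{0.05; df=(2, 13)}

def band(x0):
    """Working-Hotelling-Scheffe limits for the mean weight at height x0."""
    se = np.sqrt(mse * (1.0 / n + (x0 - xbar) ** 2 / sxx))
    return b0 + b1 * x0 - W * se, b0 + b1 * x0 + W * se
```

Running this reproduces the quantities quoted above (intercept near −39.06, slope near 61.27, MSE near 0.576, ${\displaystyle W}$ near 2.7588), and `band` can then be evaluated at any height.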

## Comparison with other methods

*Figure: Bonferroni bands for the same linear regression model, based on estimating the response variable given the observed values of ${\displaystyle X}$. These confidence bands are noticeably tighter.*

The Working–Hotelling approach may give tighter or looser confidence limits than the Bonferroni correction. In general, for a small family of statements the Bonferroni bounds are tighter, but as the number of estimated values increases, the Working–Hotelling procedure yields the narrower limits. This is because the confidence level of the Working–Hotelling–Scheffé bounds is exactly ${\displaystyle 1-\alpha }$ when all values of the independent variable, i.e. ${\displaystyle x_{i}\in \mathbb {R} }$, are considered. Algebraically, the critical value ${\displaystyle \pm W}$ remains constant as the number of estimates increases, whereas the corresponding Bonferroni value, ${\displaystyle \pm t_{1-\alpha /(2g),{\text{df}}=n-p}}$, grows with the number ${\displaystyle g}$ of estimates. Therefore, the Working–Hotelling method is better suited for large-scale comparisons, whereas Bonferroni is preferred if only a few mean responses are to be estimated. In practice, both intervals are often computed and the narrower one chosen.[4]
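This trade-off can be seen numerically. The sketch below compares the constant ${\displaystyle W}$ with the growing two-sided Bonferroni critical value; ${\displaystyle n=15}$ matches the numerical example, and the values of ${\displaystyle g}$ are chosen for illustration:

```python
import numpy as np
from scipy import stats

n, alpha = 15, 0.05
# Working-Hotelling critical value: constant in the number of estimates g
W = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))
for g in (1, 2, 5, 10, 50):
    # Two-sided Bonferroni critical value for g simultaneous estimates
    t_bonf = stats.t.ppf(1 - alpha / (2 * g), df=n - 2)
    narrower = "Bonferroni" if t_bonf < W else "Working-Hotelling"
    print(f"g={g:3d}  t_Bonferroni={t_bonf:.3f}  W={W:.3f}  narrower: {narrower}")
```

For this sample size the Bonferroni value starts below ${\displaystyle W}$ for small families and overtakes it as ${\displaystyle g}$ grows, which is exactly the crossover described above.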

Another alternative to the Working–Hotelling–Scheffé band is the Gafarian band, which is used when a confidence band of equal width at all levels is needed.[5]

The Working–Hotelling procedure is based on the same principles as Scheffé's method, which gives family confidence intervals for all possible contrasts.[6] Their proofs are almost identical.[5] This is because both methods estimate linear combinations of the mean responses at all factor levels. However, the Working–Hotelling procedure does not deal with contrasts but with different levels of the independent variable, so there is no requirement that the coefficients of the parameters sum to zero. Therefore, it has one more degree of freedom.[6]

## Footnotes

1. Miller (1966), p. 1
2. Miller (2014)
3. Neter, Wasserman and Kutner, pp. 163–165
4. Neter, Wasserman and Kutner, pp. 244–245
5. Miller (1966), pp. 123–127
6. Westfall, Tobias and Wolfinger, pp. 277–280