
Solutions

7.44
$ {X}_{1},\ldots,{X}_{n}$ are $ i.i.d.$ $ N(\theta,1)$. $ W=\overline{X}^{2}-1/n$ has

$\displaystyle E[W] = \theta^{2}+\frac{1}{n}-\frac{1}{n} = \theta^{2}$    

Since $ \overline{X}$ is sufficient and complete and $ W$ is an unbiased function of $ \overline{X}$, $ W$ is the UMVUE of $ \theta^{2}$. The CRLB for unbiased estimators of $ \theta^{2}$ is

$\displaystyle \frac{(2\theta)^{2}}{I_{n}(\theta)} = \frac{(2\theta)^{2}}{n} = \frac{4\theta^{2}}{n}$    

Now

$\displaystyle E[\overline{X}^{2}]$ $\displaystyle = \theta^{2}+\frac{1}{n}$    
$\displaystyle E[\overline{X}^{4}]$ $\displaystyle = E[(\theta+Z/\sqrt{n})^{4}], \qquad Z = \sqrt{n}(\overline{X}-\theta) \sim N(0,1)$    
  $\displaystyle = \theta^{4}+4\theta^{3}\frac{1}{\sqrt{n}}E[Z] + 6\theta^{2}\frac{1}{n}E[Z^{2}]+4\theta\frac{1}{n^{3/2}}E[Z^{3}] + \frac{1}{n^{2}}E[Z^{4}]$    
  $\displaystyle = \theta^{4} + 6\theta^{2}\frac{1}{n} + \frac{3}{n^{2}}$    

So

Var$\displaystyle (W)$ $\displaystyle =$   Var$\displaystyle (\overline{X}^{2}) = \theta^{4} + 6 \theta^{2}\frac{1}{n} + \frac{3}{n^{2}} - \theta^{4}-\frac{1}{n^{2}}-\frac{2}{n}\theta^{2}$    
  $\displaystyle = \frac{4}{n}\theta^{2} + \frac{2}{n^{2}} > \frac{4}{n}\theta^{2}$    

so the variance of $ W$ strictly exceeds the CRLB; the bound is not attained.
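As a quick numerical check (a sketch only, assuming NumPy; the values $ \theta = 2$ and $ n = 25$ are arbitrary), a Monte Carlo simulation should reproduce $ E[W] = \theta^{2}$ and Var$ (W) = 4\theta^{2}/n + 2/n^{2}$:

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 25, 200000
xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
w = xbar**2 - 1.0 / n

print(w.mean(), theta**2)                    # unbiasedness: both close to 4
print(w.var(), 4 * theta**2 / n + 2 / n**2)  # variance: both close to 0.6432
print(4 * theta**2 / n)                      # CRLB: 0.64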

7.48
a.
The MLE $ \widehat{p} = \frac{1}{n}\sum X_{i}$ has variance Var$ (\widehat{p}) = \frac{p(1-p)}{n}$. The information is

$\displaystyle I_{n}(p)$ $\displaystyle = -E\left[\frac{\partial^{2}}{\partial p^{2}} \left(\sum X_{i}\log p + \left(n-\sum X_{i}\right)\log(1-p)\right)\right]$    
  $\displaystyle = -E\left[\frac{\partial}{\partial p}\left(\frac{\sum X_{i}}{p}-\frac{n-\sum X_{i}}{1-p}\right)\right]$    
  $\displaystyle = - \left(-\frac{np}{p^{2}}-\frac{n-np}{(1-p)^{2}}\right) = \frac{n}{p}+\frac{n}{1-p} = \frac{n}{p(1-p)}$    

So the CRLB is $ \frac{p(1-p)}{n}$, which Var$ (\widehat{p})$ attains; $ \widehat{p}$ is therefore a best unbiased estimator of $ p$.
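A similar Monte Carlo sketch (NumPy assumed; $ p = 0.3$ and $ n = 50$ are arbitrary) can confirm that Var$ (\widehat{p})$ matches the CRLB:

import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 0.3, 50, 200000
phat = rng.binomial(1, p, size=(reps, n)).mean(axis=1)

print(phat.var(), p * (1 - p) / n)  # both close to 0.0042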
b.
$ W = X_{1}X_{2}X_{3}X_{4}$ has $ E[W] = E[X_{1}X_{2}X_{3}X_{4}]=p^{4}$, so $ W$ is unbiased for $ p^{4}$. $ \sum X_{i}$ is sufficient and complete.

$\displaystyle E[W\vert\sum X_{i} = t]$ $\displaystyle = P(X_{1}=X_{2}=X_{3}=X_{4}=1\vert\sum X_{i}=t)$    
  $\displaystyle = \begin{cases}0 & t < 4\\ \frac{P(X_{1}=X_{2}=X_{3}=X_{4}=1,\sum_{5}^{n}X_{i}=t-4)}{P(\sum_{1}^{n} X_{i}=t)} & t \ge 4 \end{cases}$    
  $\displaystyle = \begin{cases}0 & t < 4\\ \frac{p^{4}\binom{n-4}{t-4}p^{t-4}(1-p)^{n-t}}{\binom{n}{t}p^{t}(1-p)^{n-t}} & t \ge 4 \end{cases}$    
  $\displaystyle = \frac{t(t-1)(t-2)(t-3)}{n(n-1)(n-2)(n-3)}$    

So the UMVUE is (for $ n \ge 4$)

$\displaystyle \frac{\widehat{p}(\widehat{p}-1/n)(\widehat{p}-2/n)(\widehat{p}-3/n)}{(1-1/n)(1-2/n)(1-3/n)}$    

No unbiased estimator exists for $ n < 4$.
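A short simulation sketch (NumPy assumed; $ p = 0.6$ and $ n = 10$ are arbitrary) can confirm that $ t(t-1)(t-2)(t-3)/[n(n-1)(n-2)(n-3)]$ is unbiased for $ p^{4}$; the product vanishes automatically for $ t < 4$:

import numpy as np

rng = np.random.default_rng(2)
p, n, reps = 0.6, 10, 500000
t = rng.binomial(n, p, size=reps).astype(float)
w = t * (t - 1) * (t - 2) * (t - 3) / (n * (n - 1) * (n - 2) * (n - 3))

print(w.mean(), p**4)  # both close to 0.1296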

7.62
a.

$\displaystyle R(\theta,a\overline{X}+b)$ $\displaystyle = E_{\theta}[(a\overline{X}+b - \theta)^{2}]$    
  $\displaystyle = a^{2}$Var$\displaystyle (\overline{X}) + (a\theta+b-\theta)^{2}$    
  $\displaystyle = a^{2}\frac{\sigma^{2}}{n}+(b-(1-a)\theta)^{2}$    

b.
For $ \eta = \frac{\sigma^{2}}{n\tau^{2}+\sigma^{2}}$,

$\displaystyle \delta_{\pi} = E[\theta\vert X] = (1-\eta)\overline{X}+\eta\mu$    

So

$\displaystyle R(\theta,\delta_{\pi})$ $\displaystyle = (1-\eta)^{2}\frac{\sigma^{2}}{n}+(\eta\mu-\eta\theta)^{2} = (1-\eta)^{2}\frac{\sigma^{2}}{n}+\eta^{2}(\mu-\theta)^{2}$    
  $\displaystyle = \eta(1-\eta)\tau^{2}+\eta^{2}(\mu-\theta)^{2}$    

using the identity $ (1-\eta)\frac{\sigma^{2}}{n} = \eta\tau^{2}$.
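The identity used in the last step can be spot-checked numerically (plain Python; the values below are arbitrary):

sigma2, tau2, n, mu, theta = 2.0, 3.0, 7, 1.5, -0.75
eta = sigma2 / (n * tau2 + sigma2)

lhs = (1 - eta) ** 2 * sigma2 / n + eta ** 2 * (mu - theta) ** 2
rhs = eta * (1 - eta) * tau2 + eta ** 2 * (mu - theta) ** 2
print(lhs, rhs)  # equal, since (1 - eta) * sigma2 / n == eta * tau2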

c.

$\displaystyle B(\pi,\delta_{\pi})$ $\displaystyle = E[E[(\theta-\delta_{\pi})^{2}\vert X]]$    
  $\displaystyle = E[E[(\theta-E[\theta\vert X])^{2}\vert X]]$    
  $\displaystyle = E[$Var$\displaystyle (\theta\vert X)] = E\left[\frac{\sigma^{2}\tau^{2}}{\sigma^{2}+n\tau^{2}}\right] = \frac{\sigma^{2}\tau^{2}}{\sigma^{2}+n\tau^{2}} = \eta\tau^{2}$    
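As a check on this Bayes risk (a sketch only, NumPy assumed, arbitrary values), averaging the frequentist risk from part (b) over $ \theta \sim N(\mu,\tau^{2})$ should return $ \eta\tau^{2}$:

import numpy as np

rng = np.random.default_rng(3)
sigma2, tau2, n, mu = 2.0, 3.0, 7, 1.5
eta = sigma2 / (n * tau2 + sigma2)

theta = rng.normal(mu, np.sqrt(tau2), size=1000000)
risk = eta * (1 - eta) * tau2 + eta ** 2 * (mu - theta) ** 2
print(risk.mean(), eta * tau2)  # both close to 6/23 = 0.261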

7.63
From the previous problem, with $ \sigma^{2} = n = 1$ and prior mean $ \mu = 0$, the risk of the Bayes rule is

$\displaystyle R(\theta, \delta^\pi) = \frac{\tau^4+\theta^2}{(1+\tau^2)^2}$    

So for $ \tau^2 = 1$

$\displaystyle R(\theta, \delta^\pi) = \frac{1}{4} + \frac{1}{4} \theta^2$    

and for $ \tau^2 = 10$

$\displaystyle R(\theta, \delta^\pi) = \frac{100}{121} + \frac{1}{121} \theta^2$    

With a smaller $ \tau^2$ the risk is lower near the prior mean and higher far from the prior mean.
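A few tabulated values (plain Python, illustrative only, using $ n = 1$, $ \sigma^{2} = 1$, prior mean zero as above) make the comparison concrete:

def risk(theta, tau2):
    return (tau2 ** 2 + theta ** 2) / (1 + tau2) ** 2

for theta in (0.0, 1.0, 3.0, 10.0):
    print(theta, risk(theta, 1.0), risk(theta, 10.0))
# tau^2 = 1 gives the smaller risk at theta = 0 and 1, the larger at theta = 3 and 10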

7.64
For any $ a = (a_1,\dots,a_n)$

$\displaystyle E[\sum L(\theta_i,a_i) \vert X=x] = \sum E[L(\theta_i,a_i)\vert X = x]$    

The independence assumptions imply that $ (\theta_i,X_i)$ is independent of $ \{X_j: j \neq i\}$, and therefore

$\displaystyle E[L(\theta_i,a_i)\vert X = x] = E[L(\theta_i,a_i)\vert X_i = x_i]$    

for each $ i$. Since $ \delta^{\pi_i}$ is a Bayes rule for estimating $ \theta_i$ with loss $ L(\theta_i,a_i)$ we have

$\displaystyle E[L(\theta_i,a_i)\vert X_i = x_i] \ge E[L(\theta_i,\delta^{\pi_i}(X_i))\vert X_i = x_i] = E[L(\theta_i,\delta^{\pi_i}(X_i))\vert X = x]$    

with the final equality again following from the independence assumptions. So

$\displaystyle \sum E[L(\theta_i,a_i)\vert X_i = x_i]$ $\displaystyle \ge \sum E[L(\theta_i,\delta^{\pi_i}(X_i))\vert X = x]$    
  $\displaystyle = E[\sum L(\theta_i,\delta^{\pi_i}(X_i))\vert X = x]$    

and therefore

$\displaystyle E[\sum L(\theta_i,a_i)\vert X = x] \ge E[\sum L(\theta_i,\delta^{\pi_i}(X_i))\vert X = x]$    

for all $ a$, which implies that $ \delta^\pi(X)=(\delta^{\pi_1}(X_1),\dots,\delta^{\pi_n}(X_n))$ is a Bayes rule for estimating $ \theta=(\theta_1,\dots,\theta_n)$ with loss $ \sum L(\theta_i,a_i)$ and prior $ \prod \pi_i(\theta_i)$.
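An illustrative sketch (NumPy assumed; independent $ N(0,\tau^{2})$ priors, $ X_i \vert \theta_i \sim N(\theta_i,1)$, squared-error loss, all settings arbitrary) shows the componentwise posterior means $ \delta^{\pi_i}(x_i) = \frac{\tau^{2}}{1+\tau^{2}}x_i$ giving a smaller total Bayes risk than the naive rule that uses $ X_i$ directly, consistent with the result:

import numpy as np

rng = np.random.default_rng(4)
tau2, n, reps = 2.0, 5, 200000
theta = rng.normal(0.0, np.sqrt(tau2), size=(reps, n))
x = theta + rng.normal(size=(reps, n))

shrink = tau2 / (1 + tau2) * x                     # componentwise posterior means
loss_bayes = ((shrink - theta) ** 2).sum(axis=1).mean()
loss_naive = ((x - theta) ** 2).sum(axis=1).mean()
print(loss_bayes, loss_naive)                      # about 10/3 versus 5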


Luke Tierney 2003-05-04