
Solutions

10.3
a.
The derivative of the log likelihood is

$\displaystyle \frac{\partial}{\partial\theta} \left(-\frac{n}{2}\log \theta - \frac{\sum(X_i-\theta)^2}{2\theta}\right)$ $\displaystyle = -\frac{n}{2\theta} + \frac{\sum(X_i-\theta)^2}{2\theta^2} + \frac{\sum(X_i-\theta)}{\theta}$
  $\displaystyle = -\frac{n}{2\theta} + \frac{\sum X_i^2 - n \theta^2}{2\theta^2} = n \frac{W - \theta - \theta^2}{2\theta^2}$

So the MLE is a root of the quadratic equation $ \theta^2+\theta-W = 0$, where $ W = \frac{1}{n}\sum X_i^2$. The roots are

$\displaystyle \theta_{1,2} = -\frac{1}{2} \pm \sqrt{\frac{1}{4} + W}$

The MLE is the larger root, since that root is a local maximum and the smaller root is negative (while $ \theta > 0$).
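As a numerical sanity check, the larger root can be computed from simulated data (a minimal sketch in Python; the true value $ \theta = 2$ and the sample size are arbitrary choices for illustration, not from the text):

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n = 2.0, 500                            # illustrative values
    x = rng.normal(theta, np.sqrt(theta), size=n)  # N(theta, theta) sample
    W = np.mean(x**2)
    mle = -0.5 + np.sqrt(0.25 + W)                 # larger root of theta^2 + theta - W = 0
    print(mle)                                     # should be close to theta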
b.
The Fisher information is

$\displaystyle I_n(\theta)$ $\displaystyle = -nE\left[\frac{\partial}{\partial\theta}\left(\frac{W - \theta - \theta^2} {2\theta^2}\right)\right]$
  $\displaystyle = \frac{n E[W]}{\theta^3} - \frac{n}{2\theta^2} = n \frac{E[W] - \theta/2}{\theta^3}$
  $\displaystyle = n \frac{\theta^2+\theta-\theta/2}{\theta^3} = n \frac{\theta+1/2}{\theta^2}$

using $ E[W] = E[X_1^2] = \operatorname{Var}(X_1) + (E[X_1])^2 = \theta + \theta^2$ in the last step.

So $ \widehat{\theta} \sim$   AN$ (\theta, \frac{\theta^2}{n(\theta + 1/2)})$.
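The asymptotic variance can be checked by Monte Carlo (a sketch, using the same illustrative parameter choices as above; the replication count is arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    theta, n, reps = 2.0, 200, 20000
    x = rng.normal(theta, np.sqrt(theta), size=(reps, n))
    W = np.mean(x**2, axis=1)
    mle = -0.5 + np.sqrt(0.25 + W)
    print(np.var(mle))                     # Monte Carlo variance of the MLE
    print(theta**2 / (n * (theta + 0.5)))  # asymptotic variance theta^2 / (n(theta + 1/2))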

10.9
(but only for $ e^{-\lambda}$; do not do $ \lambda e^{-\lambda}$)

The UMVUE of $ e^{-\lambda}$ is $ V_n = (1-1/n)^{n\overline{X}}$ and the MLE is $ e^{-\overline{X}}$. Since

$\displaystyle \sqrt{n}(V_n - e^{-\overline{X}}) = \sqrt{n}O(1/n) = O(1/\sqrt{n})$    

both $ \sqrt{n}(V_n-e^{-\lambda})$ and $ \sqrt{n}(e^{-\overline{X}}-e^{-\lambda})$ have the same limiting normal distribution, and therefore their ARE is one.

In finite samples one can argue that the UMVUE should be preferred if unbiasedness is deemed important. The MLE is always larger than the UMVUE in this case, which might in some contexts be an argument for using the UMVUE. A comparison of mean square errors might be useful.

For the data provided, $ e^{-\overline{X}} = 0.0009747$ and $ V_n = 0.0007653$.
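The mean square error comparison suggested above is straightforward by simulation (a sketch; $ \lambda$ and $ n$ are illustrative choices, since the original data set is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(2)
    lam, n, reps = 2.0, 25, 100000        # illustrative values
    xbar = rng.poisson(lam, size=(reps, n)).mean(axis=1)
    umvue = (1 - 1/n) ** (n * xbar)       # V_n = (1 - 1/n)^{n xbar}
    mle = np.exp(-xbar)                   # e^{-xbar}
    target = np.exp(-lam)
    print(np.mean((umvue - target)**2))   # MSE of the UMVUE
    print(np.mean((mle - target)**2))     # MSE of the MLE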

Problem: Find the approximate joint distribution of the maximum likelihood estimators in problem 7.14 of the text.

Solution: The log-likelihood is

$\displaystyle \ell(\lambda,\mu) = -\sum w_{i} \log \lambda -(n-\sum w_{i})\log \mu -\sum z_{i}\left(\frac{1}{\lambda}+\frac{1}{\mu}\right)$    

So

$\displaystyle \frac{\partial}{\partial\lambda}\ell(\lambda,\mu)$ $\displaystyle = -\frac{\sum w_{i}}{\lambda}+\frac{\sum z_{i}}{\lambda^{2}}$    
$\displaystyle \frac{\partial}{\partial\mu}\ell(\lambda,\mu)$ $\displaystyle = -\frac{n-\sum w_{i}}{\mu}+\frac{\sum z_{i}}{\mu^{2}}$    
$\displaystyle \frac{\partial^{2}}{\partial\lambda^{2}}\ell(\lambda,\mu)$ $\displaystyle = \frac{\sum w_{i}}{\lambda^{2}}-\frac{2\sum z_{i}}{\lambda^{3}}$    
$\displaystyle \frac{\partial^{2}}{\partial\mu^{2}}\ell(\lambda,\mu)$ $\displaystyle = \frac{n-\sum w_{i}}{\mu^{2}}-\frac{2\sum z_{i}}{\mu^{3}}$    
$\displaystyle \frac{\partial^{2}}{\partial\lambda\partial\mu}\ell(\lambda,\mu)$ $\displaystyle = 0$    
$\displaystyle E[W_{i}]$ $\displaystyle = \frac{\mu}{\lambda+\mu}$    
$\displaystyle E[Z_{i}]$ $\displaystyle = \frac{\lambda\mu}{\lambda+\mu}$    

So

$\displaystyle E\left[-\frac{\partial^{2}}{\partial\lambda^{2}}\ell(\lambda,\mu)\right]$ $\displaystyle = 2 \frac{n}{\lambda^{2}}\frac{\mu}{\lambda+\mu} - \frac{n}{\lambda^{2}}\frac{\mu}{\lambda+\mu} = \frac{n}{\lambda^{2}}\frac{\mu}{\lambda+\mu}$    
$\displaystyle E\left[-\frac{\partial^{2}}{\partial\mu^{2}}\ell(\lambda,\mu)\right]$ $\displaystyle = \frac{n}{\mu^{2}}\frac{\lambda}{\lambda+\mu}$    

and thus

$\displaystyle \begin{bmatrix}\widehat{\lambda}\\ \widehat{\mu} \end{bmatrix} \sim$   AN$\displaystyle \left( \begin{bmatrix}\lambda\\ \mu \end{bmatrix}, \begin{bmatrix}\frac{\lambda^{2}(\lambda+\mu)}{n\mu} & 0\\ 0 & \frac{\mu^{2}(\lambda+\mu)}{n\lambda} \end{bmatrix} \right)$
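This joint approximation can also be checked by simulation (a sketch; it assumes, as in problem 7.14, that $ Z_i = \min(X_i, Y_i)$ and $ W_i = I(X_i \le Y_i)$ with $ X_i$ and $ Y_i$ exponential with means $ \lambda$ and $ \mu$; the parameter values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    lam, mu, n, reps = 100.0, 250.0, 100, 20000
    x = rng.exponential(lam, size=(reps, n))  # numpy's scale parameter is the mean
    y = rng.exponential(mu, size=(reps, n))
    z = np.minimum(x, y)
    w = x <= y
    lam_hat = z.sum(axis=1) / w.sum(axis=1)       # sum Z_i / sum W_i
    mu_hat = z.sum(axis=1) / (n - w.sum(axis=1))  # sum Z_i / (n - sum W_i)
    print(np.var(lam_hat), lam**2 * (lam + mu) / (n * mu))  # vs lambda^2(lambda+mu)/(n mu)
    print(np.var(mu_hat), mu**2 * (lam + mu) / (n * lam))   # vs mu^2(lambda+mu)/(n lambda)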

Problem: In the setting of problem 7.14 of the text, suppose $ n = 100$, $ \sum W_{i} = 71$, and $ \sum Z_{i} = 7802$. Also assume a smooth, vague prior distribution. Find the posterior probability that $ \lambda > 100$.

Solution: The MLEs are

$\displaystyle \widehat{\lambda}$ $\displaystyle = \frac{\sum Z_{i}}{\sum W_{i}} = 109.89$    
$\displaystyle \widehat{\mu}$ $\displaystyle = \frac{\sum Z_{i}}{n-\sum W_{i}} = 269.03$

The observed information is

$\displaystyle \widehat{I}_{n}(\widehat{\lambda},\widehat{\mu}) = \begin{bmatrix}\frac{\sum W_{i}}{\widehat{\lambda}^{2}} & 0\\ 0 & \frac{n - \sum W_{i}}{\widehat{\mu}^{2}} \end{bmatrix}$

Thus the marginal posterior distribution of $ \lambda$ is approximately

$\displaystyle N(\widehat{\lambda}, \widehat{\lambda}^{2}/\sum W_{i}) = N(109.89, 170.07)$    

So

$\displaystyle P(\lambda > 100\vert X) \approx P\left(Z > \frac{100-109.89}{\sqrt{170.07}}\right) \approx 0.7758$    
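The arithmetic can be reproduced directly (a minimal sketch using scipy's normal survival function):

    from math import sqrt
    from scipy.stats import norm

    n, sum_w, sum_z = 100, 71, 7802
    lam_hat = sum_z / sum_w                           # MLE: 109.89
    post_var = lam_hat**2 / sum_w                     # lambda_hat^2 / sum W_i = 170.07
    print(norm.sf((100 - lam_hat) / sqrt(post_var)))  # P(lambda > 100 | X) ~ 0.7758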


Luke Tierney 2003-05-04