Maximum of a Sample from a Uniform Distribution:
Suppose $X_1, \dots, X_n$ is a random sample from a $\mathrm{uniform}(0,\theta)$ distribution, where $\theta$ is unknown. An intuitive estimate of $\theta$ is the maximum of the sample. Let $Y_n = \max\{X_1, \dots, X_n\}$. Exercise 5.1.4 shows that the cdf of $Y_n$ is
$$F_{Y_n}(t) = \begin{cases} 0, & t \leq 0, \\ \dfrac{t^n}{\theta^n}, & 0 < t \leq \theta, \\ 1, & t > \theta. \end{cases}$$
Hence the pdf of $Y_n$ is $f_{Y_n}(t) = \frac{nt^{n-1}}{\theta^n}$ if $0 < t \leq \theta$, and $f_{Y_n}(t) = 0$ elsewhere.
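(For clarity, this pdf is just the derivative of the cdf on the interior of its support:
$$f_{Y_n}(t) = \frac{d}{dt}\,\frac{t^n}{\theta^n} = \frac{nt^{n-1}}{\theta^n}, \qquad 0 < t \leq \theta.)$$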
Based on its pdf, it is easy to show that $E(Y_n) = \frac{n}{n+1}\theta$. Thus, $Y_n$ is a biased estimator of $\theta$. Further, based on the cdf of $Y_n$, it is easily seen that $Y_n$ converges to $\theta$ in probability.
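(For reference, the expectation is a direct integration against the pdf above:
$$E(Y_n) = \int_0^\theta t \cdot \frac{nt^{n-1}}{\theta^n}\,dt = \frac{n}{\theta^n} \cdot \frac{\theta^{n+1}}{n+1} = \frac{n}{n+1}\,\theta,$$
so the bias is $E(Y_n) - \theta = -\theta/(n+1)$, which vanishes as $n \to \infty$.)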
MY QUESTION:
How do we know that $Y_n$ converges to $\theta$ in probability? Is it because $E(Y_n) \rightarrow \theta$?
Thanks in advance.
via Recent Questions - Mathematics - Stack Exchange http://math.stackexchange.com/questions/288018/convergence-in-probability