Please help!
I'm an A level student studying stats and I'm getting confused with unbiased estimators and the Central Limit Theorem. If my understanding is correct, by the CLT the variance of the distribution of the sample means is sigma^2/n (where n is the sample size and sigma^2 is the pop. variance). In which case, why can't you say that a good estimator for the population variance (when you are given the sample variance, say s^2) is simply s^2 multiplied by n, rather than s^2 multiplied by n/(n-1)?

Thanks Dave, I think that makes some sense. Why can't a good estimator be unbiased, though? And in the Central Limit Theorem, when one says the 'variance of the distribution of the sample means' is sigma^2/n, can that idea not be exploited the other way? Even if it is not about estimators, what does it tell you? Or is the difference simply about the distribution of sample means versus a single sample? I don't know if I'm making myself clear or getting into a bigger muddle! Can I apply n/(n-1) to sample means?

A good estimator can be unbiased and
vice versa, but it doesn't always happen. Generally, if I want to
estimate a quantity I know to be very large, I'd think it
somewhat odd if my estimator had mean zero. So it makes sense to
think that demanding that an estimator be unbiased is a good thing.
However, in an example like the divide-by-n sample variance, we have an
estimator whose mean is correct asymptotically but not quite right for any
finite n. Should this necessarily be bad?
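For concreteness, here is a minimal simulation sketch of that finite-n effect (my own illustration, not from the original exchange; the normal distribution, sigma^2 = 4 and n = 5 are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0      # true population variance (arbitrary choice)
n = 5             # small sample size, so the finite-n bias is visible
trials = 200000

samples = rng.normal(loc=10.0, scale=np.sqrt(sigma2), size=(trials, n))
divide_by_n = samples.var(axis=1, ddof=0)          # sample variance with divisor n
divide_by_n_minus_1 = samples.var(axis=1, ddof=1)  # divisor n-1 instead

print(divide_by_n.mean())           # near (n-1)/n * sigma2 = 3.2, not 4
print(divide_by_n_minus_1.mean())   # near sigma2 = 4.0

The divide-by-n average sits near 3.2 rather than 4, and the gap closes as n grows. So, is that kind of bias necessarily bad?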
Difficult question. Statistics doesn't really have many rights
and wrongs, just people's views of what is a good idea and what's
bad. There are helpful theorems which tell us for sure that some
things won't work, but out of the things which will work, knowing
which is best is quite tough.
As for your question, I'm not quite sure what you're asking. The
reason we multiply the sample variance by n/(n-1) is that
this produces an unbiased estimator, and we don't need to do this
to the sample mean because it's already unbiased. Is that what
you meant?
-Dave
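To separate the two quantities that got tangled in the original question, here is a small simulation sketch (illustrative only; the normal distribution and the values sigma^2 = 4, n = 10 are my own assumptions, not from the thread):

import numpy as np

rng = np.random.default_rng(1)
sigma2, n, trials = 4.0, 10, 100000   # illustrative values only

data = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))

# The CLT statement: the variance of the distribution of sample means is sigma^2/n.
sample_means = data.mean(axis=1)
print(sample_means.var())             # near sigma2 / n = 0.4

# The divide-by-n sample variance of a single sample is a different quantity:
s2 = data.var(axis=1, ddof=0)
print((s2 * n / (n - 1)).mean())      # near sigma2 = 4.0 (the n/(n-1) correction)
print((s2 * n).mean())                # near (n-1) * sigma2 = 36, far too large

The variance of the distribution of sample means is about sigma^2/n = 0.4, while the divide-by-n sample variance of one sample only needs the n/(n-1) correction; multiplying it by n instead gives something roughly n - 1 times too big.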