Central Limit Theorem and unbiased estimators


By Anonymous on Wednesday, January 17, 2001 - 08:13 am :

Please help!

I'm an A-level student studying stats, and I'm getting confused about unbiased estimators and the Central Limit Theorem. If my understanding is correct, by the CLT the variance of the distribution of the sample means is σ²/n (where n is the sample size and σ² is the population variance). In which case, why can't you say that a good estimator for the population variance (when you are given the sample variance, say s²) is simply n multiplied by s², rather than n/(n−1) multiplied by s²?
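In symbols, writing s² for the sample variance (the divide-by-n version, so it doesn't clash with the population variance σ²), the CLT fact I have in mind and the two candidate estimators are:

\[
\operatorname{Var}(\bar{x}) = \frac{\sigma^2}{n}, \qquad
s^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2, \qquad
\hat{\sigma}^2 = n\,s^2 \quad \text{versus} \quad \hat{\sigma}^2 = \frac{n}{n-1}\,s^2 .
\]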
Please help. Am I confusing two different ideas?

John Morgan.
By Dave Sheridan (Dms22) on Wednesday, January 17, 2001 - 12:39 pm :
It's not quite as simple as that. By "good" you really mean "unbiased", and there is some disagreement about whether these two concepts are the same!

If I say x is an unbiased estimator of a, then the expected value of x is a. Now, if you work out the expected value of Σ(xᵢ − x̄)², you'll find that it's (n−1)σ². So your estimator, Σ(xᵢ − x̄)²/n, has expected value ((n−1)/n)σ² and thus is not an unbiased estimator of σ². For very large n the difference will be very small, but that's not the point. If you want an unbiased estimator, you need to divide by n−1, not n.
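In case it helps to see where the (n−1) comes from, here is the short calculation (a sketch, assuming the observations are independent with common mean μ and variance σ²). Since Σ(xᵢ − x̄) = 0,

\[
\sum_{i=1}^{n}(x_i - \bar{x})^2 = \sum_{i=1}^{n}(x_i - \mu)^2 - n(\bar{x} - \mu)^2 ,
\]

and taking expectations, with E[(xᵢ − μ)²] = σ² and E[(x̄ − μ)²] = σ²/n,

\[
E\!\left[\sum_{i=1}^{n}(x_i - \bar{x})^2\right] = n\sigma^2 - n\cdot\frac{\sigma^2}{n} = (n-1)\sigma^2 .
\]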

Does that make sense?

-Dave


By Anonymous on Wednesday, January 17, 2001 - 02:48 pm :

Thanks Dave,

I think that makes some sense. Why can't a good estimator be unbiased, though? And in the Central Limit Theorem, when one says the variance of the distribution of the sample means is σ²/n, can that idea not be exploited the other way? Even if it is not about estimators, what does it tell you? Or is the difference simply between the distribution of sample means and a single sample?

I don't know if I'm making myself clear or just getting into a bigger muddle! Can I apply the n/(n−1) correction to sample means?
Thanks
John Morgan


By Dave Sheridan (Dms22) on Thursday, January 18, 2001 - 11:36 am :

A good estimator can be unbiased and vice versa, but it doesn't always happen. Generally, if I wanted to estimate a quantity I knew to be very large, I'd think it somewhat odd if my estimator had mean zero. So it makes sense to think that demanding that an estimator be unbiased is a good thing. However, in an example like the sample variance, we have something whose mean is correct asymptotically but not quite there for any finite n. Should that necessarily be bad?
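To put a number on "not quite there": writing s² for the divide-by-n sample variance, the shortfall is exactly σ²/n, which disappears as n grows:

\[
E[s^2] = E\!\left[\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\right] = \frac{n-1}{n}\,\sigma^2 = \sigma^2 - \frac{\sigma^2}{n} .
\]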

Difficult question. Statistics doesn't really have many rights and wrongs, just people's views of what is a good idea and what is a bad one. There are helpful theorems which tell us for sure that some things won't work, but out of the things which will work, knowing which is best is quite tough.

As for your other question, I'm not quite sure what you're asking. The reason we multiply the sample variance by n/(n−1) is that this produces an unbiased estimator, and we don't need to do anything like that to the sample mean because it's already an unbiased estimator of the population mean. Is that what you meant?
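If it helps to see both points numerically, here is a quick simulation sketch (assuming Python with numpy is available; the particular values μ = 5, σ² = 4, n = 10 are made up purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n, trials = 5.0, 4.0, 10, 200_000

# Draw many independent samples of size n from a population with mean mu and variance sigma2.
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))

sample_means = samples.mean(axis=1)        # x-bar for each sample
s2_div_n = samples.var(axis=1, ddof=0)     # sample variance, dividing by n
s2_div_n1 = samples.var(axis=1, ddof=1)    # after the n/(n-1) correction, i.e. dividing by n-1

print("average x-bar:          ", sample_means.mean())      # close to mu: already unbiased
print("variance of the x-bars: ", sample_means.var(ddof=0)) # close to sigma2/n
print("average divide-by-n:    ", s2_div_n.mean())          # close to (n-1)/n * sigma2, too small
print("average divide-by-(n-1):", s2_div_n1.mean())         # close to sigma2: unbiased

The last two lines are the point of this thread: the uncorrected average sits below σ² = 4 while the corrected one sits at about 4, and the second line shows the σ²/n fact from the original question.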

-Dave