Hypothetical shorts

Some short statements about hypothesis testing: are they true, false, or somewhere in between?

Problem

Are the following statements about hypothesis testing true or false?

Give convincing reasons for your answers. If a statement is false, can you give an example to show why? In that case, is there some sense in which the statement is "usually" true, with just a few special cases where it fails, or is it "usually" false? (A couple of simulation sketches after the list of statements may help you explore some of them.)

If you have not met p-values before, you could look at the article What is a Hypothesis Test?

  1. A significance level of 5% means that there is a 5% probability of getting a test statistic in the critical region if the null hypothesis is true.
  2. A significance level of 5% means that there is a 5% probability of the null hypothesis being true if the test statistic lies in the critical region.
  3. The p-value of an experiment gives the probability of the null hypothesis being true.
  4. If the p-value is less than 0.05, then the alternative hypothesis is true.
  5. If the p-value is less than 0.05, then the alternative hypothesis is more likely to be true than the null hypothesis.
  6. The closer the p-value is to 1, the greater the probability that the null hypothesis is true.
  7. If we have a larger sample size, we will get a more reliable result from the hypothesis test.
  8. If we repeat an experiment and we get a p-value less than 0.05 in either experiment, then we must reject the null hypothesis.
  9. If we do not get a significant result from our experiment, we should go on increasing our sample size until we do.
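If you would like to experiment, the sketch below simulates many experiments in which the null hypothesis is genuinely true and counts how often the p-value falls below 0.05. It is only a sketch: the one-sample t-test, the Normal(0, 1) population, the sample size of 30 and the 10 000 repetitions are illustrative assumptions, not part of the problem.

```python
# A minimal simulation sketch (assumes numpy and scipy are available).
# Question: if the null hypothesis really is true, how often does a test
# at the 5% significance level give a result in the critical region?

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)  # arbitrary seed, for reproducibility

n_experiments = 10_000  # number of simulated experiments (illustrative)
sample_size = 30        # observations per experiment (illustrative)
alpha = 0.05            # significance level

rejections = 0
for _ in range(n_experiments):
    # Data drawn from a Normal(0, 1) population, so H0: "the mean is 0" is true.
    sample = rng.normal(loc=0.0, scale=1.0, size=sample_size)
    result = stats.ttest_1samp(sample, popmean=0.0)
    if result.pvalue < alpha:
        rejections += 1

print(f"Proportion of experiments rejecting a true H0: {rejections / n_experiments:.3f}")
# Expect a value close to 0.05 -- worth comparing with statements 1 and 2.
```

You could also adapt the sketch, for example by giving the population a non-zero mean or by varying the sample size, and see which of the statements your results support.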

The XKCD cartoon Significant provides a nice illustration of the idea in question 8.
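The cartoon's scenario of testing many hypotheses can also be explored with a short calculation. The sketch below assumes 20 independent tests, each with a true null hypothesis and a 5% significance level; the number 20 is borrowed from the cartoon and is otherwise an arbitrary choice.

```python
# If each of 20 independent tests has a 5% chance of a "significant" result
# when its null hypothesis is true, what is the chance that at least one
# of them looks significant? (20 tests and the 5% level are illustrative.)

n_tests = 20
alpha = 0.05

p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"P(at least one p-value below {alpha}) = {p_at_least_one:.2f}")
# Roughly 0.64 -- something to bear in mind when thinking about question 8.
```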



This resource is part of the collection Statistics - Maths of Real Life