*This resource is part of the collection Probability and Evidence.*

This problem considers another context in which understanding the evidence is important: medical testing.


The correct use of probability and statistics is fundamental to various applications, one of which is medical testing.

The ELISA test, an indicator for HIV, is a good example of this. There is a 1% chance that if you are HIV negative, you will get a positive result. However, this does not mean that if you get a positive result, then you have a 1% chance of being HIV negative.

The ELISA (Enzyme-Linked Immunosorbent Assay) test can be used to detect whether someone is HIV positive. The test is cheap and easy to administer, but it is not always accurate.

In particular, for someone without HIV, there is a 1% chance that the test will record a positive result, called a *false-positive*.

**Why is this not the same as saying "a positive result means there is a 99% chance of being infected"?**

In low-risk groups, the rate of infection is approximately 1 in 10,000.

Virtually all people with HIV record a positive result: the probability of a *false-negative* result is negligible.

**How could you use this new information to calculate the probability that someone who gets a positive result has HIV?**

Are there any tables or diagrams that might help you represent the information?

Can you consider what you might expect to happen to 10,000 random people?

When you have thought about these questions, read on for some suggestions:

You could draw a two-way (contingency) table like this:

| | Positive Test Result | Negative Test Result | Total |
|---|---|---|---|
| Person is HIV Positive | | | |
| Person is HIV Negative | | | |
| Total | | | 10 000 |
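If you would like to check your completed table, here is a minimal sketch of the expected-frequency argument, using only the figures stated above (prevalence of 1 in 10,000, a 1% false-positive rate, and a negligible false-negative rate):

```python
# Expected outcomes when 10,000 random low-risk people are tested.
# Assumed figures, taken from the problem statement:
population = 10_000
prevalence = 1 / 10_000        # infection rate in low-risk groups
false_positive_rate = 0.01     # P(positive result | not infected)

infected = population * prevalence               # expect 1 person with HIV
uninfected = population - infected               # expect 9,999 without HIV

true_positives = infected                        # false negatives are negligible
false_positives = uninfected * false_positive_rate   # expect about 100

# Of everyone who tests positive, what fraction actually has HIV?
p_infected_given_positive = true_positives / (true_positives + false_positives)

print(f"Expected positive results: {true_positives + false_positives:.2f}")
print(f"P(infected | positive result) = {p_infected_given_positive:.4f}")
```

So of the roughly 101 people expected to test positive, only about 1 actually has HIV: a positive result corresponds to a probability of infection of about 1%, not 99%.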

**Does this result surprise you?
Why is this test useful, despite the number of false-positives it produces?**