Edwin Chan | Posted on Tuesday, 17 June, 2003 - 05:58 am:
Informally, independence is an assumption that
the occurrence of one event (A) has no effect on
the probability of occurrence of another event
(B) and vice versa, i.e. P(A|B) = P(A)
and P(B|A) = P(B). The important corollary of this
is that the JOINT probability of A
and B (i.e. the probability of the event
A AND B)
may be simply expressed as the product:
P(A and B) = P(A) x P(B).
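As a quick sanity check, here is a small simulation sketch (the two fair coin flips are my own illustrative choice of independent events): the empirical joint frequency of "both heads" should come out close to the product of the two marginal frequencies.

```python
import random

random.seed(0)
N = 100_000

# Each trial flips two independent fair coins.
# A = first coin is heads, B = second coin is heads.
count_a = count_b = count_both = 0
for _ in range(N):
    first = random.random() < 0.5
    second = random.random() < 0.5
    count_a += first
    count_b += second
    count_both += first and second

p_a = count_a / N
p_b = count_b / N
p_ab = count_both / N

# For independent events, P(A and B) should be close to P(A) * P(B).
print(p_a, p_b, p_ab, p_a * p_b)
```

With 100,000 trials the joint frequency and the product of marginals typically agree to two or three decimal places, which is exactly what the multiplication rule predicts.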
A binomial experiment (Bin(n, p)) is made
up of n repeated independent Bernoulli
experiments (also called binary trials), all with
the same probability (p) of success. The
binomial events (random variables) are the total
number of successes in n trials (ranging from
zero to n). It is precisely because the
constituent Bernoulli trials are assumed to be
independent that we can write down the binomial
probability function as a simple product of
the Bernoulli probabilities.
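A minimal sketch of that product form, using the standard binomial pmf: any one sequence with k successes has probability p^k * (1-p)^(n-k) (a product of independent Bernoulli probabilities), and the binomial coefficient just counts how many such sequences there are.

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent Bernoulli(p) trials."""
    # p**k * (1 - p)**(n - k) is the probability of one specific sequence
    # of k successes and n - k failures -- a product of Bernoulli
    # probabilities, which is only valid because the trials are independent.
    # comb(n, k) counts the number of such sequences.
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_pmf(3, 10, 0.5))  # 0.1171875
```

Summing the pmf over k = 0..n gives 1, as it must for a probability distribution.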
When n becomes very large and p very small,
computing binomial probabilities becomes very
tedious (because of the factorials and powers),
so a useful approximation to the binomial
distribution is the Poisson with parameter
lambda = np.
The assumption of independence carries over because
of the mathematical link with the binomial
(proved in any statistics text). Independence is
also at the root of the "memoryless" property
of the Poisson process.
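A small numerical sketch of the approximation (n = 1000 and p = 0.003 are my own illustrative choices for "large n, small p"): the Poisson pmf with lambda = np tracks the exact binomial probabilities closely.

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    # Exact binomial probability of k successes in n trials.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    # Poisson probability of k events when the mean is lam.
    return lam**k * exp(-lam) / factorial(k)

n, p = 1000, 0.003   # large n, small p; lambda = n * p = 3.0
lam = n * p
for k in range(7):
    print(k, binomial_pmf(k, n, p), poisson_pmf(k, lam))
```

Running this shows agreement to roughly three decimal places across the listed k values, with the Poisson version avoiding the huge factorials and powers entirely.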