Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes.
Two events are independent, statistically independent, or stochastically independent^{[1]} if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other.
When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are mutually independent (or collectively independent) intuitively means that each event is independent of any combination of other events in the collection. Similar notions exist for collections of random variables.
The name "mutual independence" (same as "collective independence") seems the outcome of a pedagogical choice, merely to distinguish the stronger notion from "pairwise independence" which is a weaker notion. In the advanced literature of probability theory, statistics and stochastic processes, the stronger notion is simply named independence with no modifier. It is stronger since independence implies pairwise independence, but not the other way around.
Two events $A$ and $B$ are independent (often written as $A \perp B$ or $A \perp\!\!\!\perp B$) if and only if their joint probability equals the product of their probabilities:^{[2]}^{:p. 29}^{[3]}^{:p. 10}

$P(A \cap B) = P(A)P(B)$ (Eq.1)
Why this defines independence is made clear by rewriting with conditional probabilities:

$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = P(A)$

and similarly

$P(B \mid A) = \frac{P(A \cap B)}{P(A)} = P(B).$

Thus, the occurrence of $B$ does not affect the probability of $A$, and vice versa. Although the derived expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if $P(A)$ or $P(B)$ is 0. Furthermore, the preferred definition makes clear by symmetry that when $A$ is independent of $B$, $B$ is also independent of $A$.
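To make Eq.1 concrete, here is a minimal Python sketch (not part of the original article; the die-roll events and the helper `prob` are chosen purely for illustration) that verifies the product rule by direct enumeration:

```python
from fractions import Fraction

# Sample space of one fair die roll; every outcome has probability 1/6.
omega = set(range(1, 7))

def prob(event):
    return Fraction(len(event), len(omega))

A = {2, 4, 6}       # "the roll is even",      P(A) = 1/2
B = {1, 2, 3, 4}    # "the roll is at most 4", P(B) = 2/3

# Eq.1: independence means P(A ∩ B) = P(A) P(B).
print(prob(A & B))            # 1/3
print(prob(A) * prob(B))      # 1/3  -> A and B are independent
```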
Stated in terms of log probability, two events are independent if and only if the log probability of the joint event is the sum of the log probabilities of the individual events:

$\log P(A \cap B) = \log P(A) + \log P(B)$
In information theory, negative log probability is interpreted as information content, and thus two events are independent if and only if the information content of the combined event equals the sum of the information contents of the individual events:

$\mathrm{I}(A \cap B) = \mathrm{I}(A) + \mathrm{I}(B)$
See Information content § Additivity of independent events for details.
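Continuing the illustrative die-roll example above (the probabilities 1/2, 2/3, and 1/3 come from that sketch, not from the article), the additivity in log space can be checked numerically:

```python
import math

p_a, p_b, p_ab = 1/2, 2/3, 1/3   # P(A), P(B), P(A ∩ B) from the die example

# Log probabilities add exactly when the events are independent:
print(math.log(p_ab), math.log(p_a) + math.log(p_b))      # equal

# Information content I(E) = -log2 P(E) is likewise additive:
print(-math.log2(p_ab), -math.log2(p_a) + -math.log2(p_b))  # equal
```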
Stated in terms of odds, two events are independent if and only if the odds ratio of $A$ and $B$ is unity (1). Analogously with probability, this is equivalent to the conditional odds being equal to the unconditional odds:

$O(A \mid B) = O(A)$ and $O(B \mid A) = O(B)$,

or to the odds of one event, given the other event, being the same as the odds of the event, given the other event not occurring:

$O(A \mid B) = O(A \mid \neg B)$ and $O(B \mid A) = O(B \mid \neg A)$.

The odds ratio can be defined as

$O(A \mid B) : O(A \mid \neg B)$,

or symmetrically for the odds of $B$ given $A$, and thus is 1 if and only if the events are independent.
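The following sketch (illustrative only; the events and the helper `cond_odds` are ours, reusing the die-roll example) computes the odds ratio and confirms it is unity for independent events:

```python
from fractions import Fraction

omega = set(range(1, 7))                 # one fair die roll
def prob(event):
    return Fraction(len(event), len(omega))

A = {2, 4, 6}                            # even roll
B = {1, 2, 3, 4}                         # roll at most 4

def cond_odds(event, given):
    p = prob(event & given) / prob(given)   # P(event | given)
    return p / (1 - p)                      # odds = p / (1 - p)

# Odds ratio O(A | B) : O(A | ¬B); unity iff the events are independent.
print(cond_odds(A, B) / cond_odds(A, omega - B))   # 1
```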
A finite set of events $\{A_i\}_{i=1}^{n}$ is pairwise independent if every pair of events is independent^{[4]}—that is, if and only if for all distinct pairs of indices $m, k$,

$P(A_m \cap A_k) = P(A_m)P(A_k)$ (Eq.2)
A finite set of events is mutually independent if every event is independent of any intersection of the other events^{[4]}^{[3]}^{:p. 11}—that is, if and only if for every $k \le n$ and for every $k$-element subset of events $\{B_i\}_{i=1}^{k}$ of $\{A_i\}_{i=1}^{n}$,

$P\left(\bigcap_{i=1}^{k} B_i\right) = \prod_{i=1}^{k} P(B_i)$ (Eq.3)
This is called the multiplication rule for independent events. Note that it is not a single condition involving only the product of all the probabilities of all single events (see below for a counterexample); it must hold true for all subsets of events.
For more than two events, a mutually independent set of events is (by definition) pairwise independent; but the converse is not necessarily true (see below for a counterexample).^{[2]}^{:p. 30}
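A standard counterexample separating the two notions is two fair coin flips with the events "first flip is heads", "second flip is heads", and "the two flips agree". The sketch below (illustrative Python, not from the article) verifies Eq.2 for every pair while Eq.3 fails for the full triple:

```python
from fractions import Fraction
from itertools import product

# Two fair coin flips; the classic pairwise-but-not-mutually-independent example.
omega = list(product("HT", repeat=2))

def prob(pred):
    return Fraction(sum(1 for w in omega if pred(w)), len(omega))

A = lambda w: w[0] == "H"      # first flip is heads
B = lambda w: w[1] == "H"      # second flip is heads
C = lambda w: w[0] == w[1]     # the two flips agree

# Every pair satisfies Eq.2:
for e, f in [(A, B), (A, C), (B, C)]:
    assert prob(lambda w: e(w) and f(w)) == prob(e) * prob(f)

# ...but Eq.3 fails for the triple, so the events are not mutually independent:
print(prob(lambda w: A(w) and B(w) and C(w)))   # 1/4
print(prob(A) * prob(B) * prob(C))              # 1/8
```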
Two random variables $X$ and $Y$ are independent if and only if (iff) the elements of the π-system generated by them are independent; that is to say, for every $x$ and $y$, the events $\{X \le x\}$ and $\{Y \le y\}$ are independent events (as defined above in Eq.1). That is, $X$ and $Y$ with cumulative distribution functions $F_X(x)$ and $F_Y(y)$ are independent iff the combined random variable $(X, Y)$ has a joint cumulative distribution function^{[3]}^{:p. 15}

$F_{X,Y}(x,y) = F_X(x)F_Y(y)$ for all $x, y$ (Eq.4)

or equivalently, if the probability densities $f_X(x)$ and $f_Y(y)$ and the joint probability density $f_{X,Y}(x,y)$ exist,

$f_{X,Y}(x,y) = f_X(x)f_Y(y)$ for all $x, y$.
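Eq.4 can be checked empirically by Monte Carlo simulation; the sketch below (illustrative, assuming NumPy; the test point $(x_0, y_0)$ is arbitrary) compares the empirical joint CDF of two independently generated samples with the product of the empirical marginals:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)        # X ~ N(0, 1)
y = rng.normal(size=n)        # Y ~ N(0, 1), drawn independently of X

# Empirical check of Eq.4 at one point (x0, y0):
x0, y0 = 0.5, -0.3
joint = np.mean((x <= x0) & (y <= y0))          # F_{X,Y}(x0, y0)
prod = np.mean(x <= x0) * np.mean(y <= y0)      # F_X(x0) F_Y(y0)
print(joint, prod)            # agree up to Monte Carlo error
```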
A finite set of random variables is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily mutually independent as defined next.
A finite set of $n$ random variables $\{X_1, \ldots, X_n\}$ is mutually independent if and only if for any sequence of numbers $\{x_1, \ldots, x_n\}$, the events $\{X_1 \le x_1\}, \ldots, \{X_n \le x_n\}$ are mutually independent events (as defined above in Eq.3). This is equivalent to the following condition on the joint cumulative distribution function $F_{X_1,\ldots,X_n}(x_1,\ldots,x_n)$: a finite set of $n$ random variables $\{X_1, \ldots, X_n\}$ is mutually independent if and only if^{[3]}^{:p. 16}

$F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = F_{X_1}(x_1) \cdots F_{X_n}(x_n)$ for all $x_1, \ldots, x_n$ (Eq.5)
Notice that it is not necessary here to require that the probability distribution factorizes for all possible $k$-element subsets, as in the case for $n$ events. This is not required because it is implied by the full factorization; e.g. $F_{X_1,X_2,X_3}(x_1,x_2,x_3) = F_{X_1}(x_1)F_{X_2}(x_2)F_{X_3}(x_3)$ implies $F_{X_1,X_3}(x_1,x_3) = F_{X_1}(x_1)F_{X_3}(x_3)$, obtained by letting $x_2 \to +\infty$, since $F_{X_2}(x_2) \to 1$.
The measure-theoretically inclined reader may prefer to substitute events $\{X \in A\}$ for events $\{X \le x\}$ in the above definition, where $A$ is any Borel set. That definition is exactly equivalent to the one above when the values of the random variables are real numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space (which includes topological spaces endowed with appropriate σ-algebras).
Two random vectors $\mathbf{X} = (X_1,\ldots,X_m)^{\mathrm{T}}$ and $\mathbf{Y} = (Y_1,\ldots,Y_n)^{\mathrm{T}}$ are called independent if^{[5]}^{:p. 187}

$F_{\mathbf{X},\mathbf{Y}}(\mathbf{x},\mathbf{y}) = F_{\mathbf{X}}(\mathbf{x}) F_{\mathbf{Y}}(\mathbf{y})$ for all $\mathbf{x}, \mathbf{y}$ (Eq.6)

where $F_{\mathbf{X}}(\mathbf{x})$ and $F_{\mathbf{Y}}(\mathbf{y})$ denote the cumulative distribution functions of $\mathbf{X}$ and $\mathbf{Y}$, and $F_{\mathbf{X},\mathbf{Y}}(\mathbf{x},\mathbf{y})$ denotes their joint cumulative distribution function. Independence of $\mathbf{X}$ and $\mathbf{Y}$ is often denoted by $\mathbf{X} \perp\!\!\!\perp \mathbf{Y}$. Written component-wise, $\mathbf{X}$ and $\mathbf{Y}$ are called independent if

$F_{X_1,\ldots,X_m,Y_1,\ldots,Y_n}(x_1,\ldots,x_m,y_1,\ldots,y_n) = F_{X_1,\ldots,X_m}(x_1,\ldots,x_m) F_{Y_1,\ldots,Y_n}(y_1,\ldots,y_n)$ for all $x_1,\ldots,x_m,y_1,\ldots,y_n$.
The definition of independence may be extended from random vectors to a stochastic process. It is then required, for an independent stochastic process, that the random variables obtained by sampling the process at any $n$ times $t_1, \ldots, t_n$ are independent random variables for any $n$.^{[6]}^{:p. 163}

Formally, a stochastic process $\{X_t\}_{t \in \mathcal{T}}$ is called independent if and only if for all $n \in \mathbb{N}$ and for all $t_1, \ldots, t_n \in \mathcal{T}$

$F_{X_{t_1},\ldots,X_{t_n}}(x_1,\ldots,x_n) = F_{X_{t_1}}(x_1) \cdots F_{X_{t_n}}(x_n)$ for all $x_1, \ldots, x_n$ (Eq.7)

where $F_{X_{t_1},\ldots,X_{t_n}}(x_1,\ldots,x_n) = P(X_{t_1} \le x_1, \ldots, X_{t_n} \le x_n)$. Notice that independence of a stochastic process is a property within a single stochastic process, not between two stochastic processes.
Independence of two stochastic processes is a property between two stochastic processes $\{X_t\}_{t \in \mathcal{T}}$ and $\{Y_t\}_{t \in \mathcal{T}}$ that are defined on the same probability space $(\Omega, \mathcal{F}, P)$. Formally, two stochastic processes $\{X_t\}_{t \in \mathcal{T}}$ and $\{Y_t\}_{t \in \mathcal{T}}$ are said to be independent if for all $n \in \mathbb{N}$ and for all $t_1, \ldots, t_n \in \mathcal{T}$, the random vectors $(X_{t_1},\ldots,X_{t_n})$ and $(Y_{t_1},\ldots,Y_{t_n})$ are independent,^{[7]}^{:p. 515} i.e. if

$F_{X_{t_1},\ldots,X_{t_n},Y_{t_1},\ldots,Y_{t_n}}(x_1,\ldots,x_n,y_1,\ldots,y_n) = F_{X_{t_1},\ldots,X_{t_n}}(x_1,\ldots,x_n) \cdot F_{Y_{t_1},\ldots,Y_{t_n}}(y_1,\ldots,y_n)$ for all $x_1,\ldots,x_n,y_1,\ldots,y_n$ (Eq.8)
The definitions above (Eq.1 and Eq.2) are both generalized by the following definition of independence for σ-algebras. Let $(\Omega, \Sigma, P)$ be a probability space and let $\mathcal{A}$ and $\mathcal{B}$ be two sub-σ-algebras of $\Sigma$. $\mathcal{A}$ and $\mathcal{B}$ are said to be independent if, whenever $A \in \mathcal{A}$ and $B \in \mathcal{B}$,

$P(A \cap B) = P(A)P(B).$

Likewise, a finite family of σ-algebras $(\tau_i)_{i \in I}$, where $I$ is an index set, is said to be independent if and only if

$\forall (A_i)_{i \in I} \in \prod_{i \in I} \tau_i : P\left(\bigcap_{i \in I} A_i\right) = \prod_{i \in I} P(A_i),$

and an infinite family of σ-algebras is said to be independent if all its finite subfamilies are independent.

The new definition relates to the previous ones very directly: two events are independent (in the old sense) if and only if the σ-algebras they generate are independent (in the new sense), where the σ-algebra generated by an event $E \in \Sigma$ is $\{\emptyset, E, \Omega \setminus E, \Omega\}$; and two random variables are independent (in the old sense) if and only if the σ-algebras they generate are independent (in the new sense).

Using this definition, it is easy to show that if $X$ and $Y$ are random variables and $Y$ is constant, then $X$ and $Y$ are independent, since the σ-algebra generated by a constant random variable is the trivial σ-algebra $\{\emptyset, \Omega\}$. Probability-zero events cannot affect independence, so independence also holds if $Y$ is only Pr-almost surely constant.
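The correspondence between independent events and independent generated σ-algebras can be verified mechanically on a finite sample space; this sketch (illustrative Python; the helper `generated` and the die-roll events are ours) checks the product rule for every pair drawn from $\sigma(A) \times \sigma(B)$:

```python
from fractions import Fraction
from itertools import product

omega = frozenset(range(1, 7))          # one fair die roll
def prob(event):
    return Fraction(len(event), len(omega))

A = frozenset({2, 4, 6})                # even roll
B = frozenset({1, 2, 3, 4})             # roll at most 4

def generated(E):                       # σ(E) = {∅, E, Ω \ E, Ω}
    return [frozenset(), E, omega - E, omega]

# A and B are independent events, so every pair from σ(A) × σ(B)
# satisfies P(S ∩ T) = P(S) P(T) -- independence of the σ-algebras.
print(all(prob(S & T) == prob(S) * prob(T)
          for S, T in product(generated(A), generated(B))))   # True
```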
Note that an event $A$ is independent of itself if and only if

$P(A) = P(A \cap A) = P(A) \cdot P(A)$, i.e. $P(A) = 0$ or $P(A) = 1$.
Thus an event is independent of itself if and only if it almost surely occurs or its complement almost surely occurs; this fact is useful when proving zero–one laws.^{[8]}
If $X$ and $Y$ are independent random variables, then the expectation operator $\mathrm{E}$ has the property

$\mathrm{E}[XY] = \mathrm{E}[X]\,\mathrm{E}[Y],$

and the covariance $\mathrm{cov}[X,Y]$ is zero, since we have

$\mathrm{cov}[X,Y] = \mathrm{E}[XY] - \mathrm{E}[X]\,\mathrm{E}[Y] = 0.$
(The converse of these, i.e. the proposition that if two random variables have a covariance of 0 they must be independent, is not true. See uncorrelated.)
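Both directions of this remark can be seen numerically. In the sketch below (illustrative, assuming NumPy), independent samples have near-zero covariance, while $Z = X^2$ for a standard normal $X$ is uncorrelated with $X$ yet completely determined by it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.normal(size=n)
y = rng.normal(size=n)       # generated independently of x

# Independence implies E[XY] = E[X] E[Y], i.e. zero covariance:
print(np.cov(x, y)[0, 1])    # ~ 0, up to sampling error

# The converse fails: for standard normal X, cov(X, X^2) = E[X^3] = 0,
# yet Z = X^2 is a deterministic function of X.
z = x ** 2
print(np.cov(x, z)[0, 1])    # also ~ 0, although X and Z are dependent
```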
Similarly for two stochastic processes $\{X_t\}_{t \in \mathcal{T}}$ and $\{Y_t\}_{t \in \mathcal{T}}$: if they are independent, then they are uncorrelated.^{[9]}^{:p. 151}
Two random variables $X$ and $Y$ are independent if and only if the characteristic function of the random vector $(X, Y)$ satisfies

$\varphi_{(X,Y)}(t,s) = \varphi_X(t) \cdot \varphi_Y(s)$ for all $t, s$.

In particular, the characteristic function of their sum is the product of their marginal characteristic functions:

$\varphi_{X+Y}(t) = \varphi_X(t) \cdot \varphi_Y(t),$

though the reverse implication is not true. Random variables that satisfy the latter condition are called subindependent.
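The factorization of the sum's characteristic function can be observed empirically; this sketch (illustrative, assuming NumPy; the helper `ecf` and the choice of exponential samples are ours) estimates both sides from independent draws:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
x = rng.exponential(size=n)      # X and Y: independent exponential samples
y = rng.exponential(size=n)

def ecf(samples, t):
    """Empirical characteristic function E[exp(i t X)]."""
    return np.mean(np.exp(1j * t * samples))

t = 0.7
print(ecf(x + y, t))             # characteristic function of the sum
print(ecf(x, t) * ecf(y, t))     # product of the marginals -- nearly identical
```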
The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are independent. By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trial is 8 are not independent.
If two cards are drawn with replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are independent. By contrast, if two cards are drawn without replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are not independent, because a deck that has had a red card removed has proportionately fewer red cards.
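The card example can be made exact with a short computation (illustrative Python, not from the article):

```python
from fractions import Fraction

red = Fraction(26, 52)        # a standard deck: 26 red cards out of 52

# With replacement the two draws are independent:
print(red * red)                          # P(red, then red) = 1/4

# Without replacement the second draw depends on the first:
print(red * Fraction(25, 51))             # P(red, then red) = 25/102 != 1/4
```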
Consider the two probability spaces shown. In both cases, the individual probabilities of the events $A$, $B$, and $C$ are the same. The random variables in the first space are pairwise independent because $P(A \mid B) = P(A \mid C) = P(A)$, $P(B \mid A) = P(B \mid C) = P(B)$, and $P(C \mid A) = P(C \mid B) = P(C)$; but the three random variables are not mutually independent. The random variables in the second space are both pairwise independent and mutually independent. To illustrate the difference, consider conditioning on two events. In the pairwise independent case, although any one event is independent of each of the other two individually, it is not independent of the intersection of the other two:

$P(A \mid B \cap C) \neq P(A).$

In the mutually independent case, however,

$P(A \mid B \cap C) = P(A).$
It is possible to create a three-event example in which

$P(A \cap B \cap C) = P(A)P(B)P(C),$

and yet no two of the three events are pairwise independent (and hence the set of events is not mutually independent).^{[10]} This example shows that mutual independence involves requirements on the products of probabilities of all combinations of events, not just the single events as in this example. For another example, take $A$ to be empty and $B$ and $C$ to be identical events with probability strictly between 0 and 1. Then, since $B$ and $C$ are the same event, they are not independent; but the probability of the intersection of the events is zero, which equals the product of the probabilities.
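The second (degenerate) example is easy to verify by enumeration; this sketch (illustrative Python; the particular events are ours) shows the triple product condition holding while pairwise independence fails:

```python
from fractions import Fraction

omega = frozenset(range(1, 7))            # one fair die roll
def prob(event):
    return Fraction(len(event), len(omega))

A = frozenset()                           # the empty event, P(A) = 0
B = frozenset({1, 2, 3})                  # P(B) = 1/2
C = B                                     # identical to B

# The triple product condition holds trivially (both sides are 0)...
print(prob(A & B & C) == prob(A) * prob(B) * prob(C))   # True
# ...yet B and C are not pairwise independent:
print(prob(B & C), prob(B) * prob(C))                   # 1/2 vs 1/4
```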
The events $A$ and $B$ are conditionally independent given an event $C$ when

$P(A \cap B \mid C) = P(A \mid C) \cdot P(B \mid C).$
Intuitively, two random variables $X$ and $Y$ are conditionally independent given $Z$ if, once $Z$ is known, the value of $Y$ does not add any additional information about $X$. For instance, two measurements $X$ and $Y$ of the same underlying quantity $Z$ are not independent, but they are conditionally independent given $Z$ (unless the errors in the two measurements are somehow connected).
The formal definition of conditional independence is based on the idea of conditional distributions. If $X$, $Y$, and $Z$ are discrete random variables, then we define $X$ and $Y$ to be conditionally independent given $Z$ if

$P(X \le x, Y \le y \mid Z = z) = P(X \le x \mid Z = z) \cdot P(Y \le y \mid Z = z)$

for all $x$, $y$ and $z$ such that $P(Z = z) > 0$. On the other hand, if the random variables are continuous and have a joint probability density function $f_{XYZ}(x,y,z)$, then $X$ and $Y$ are conditionally independent given $Z$ if

$f_{XY|Z}(x,y \mid z) = f_{X|Z}(x \mid z) \cdot f_{Y|Z}(y \mid z)$

for all real numbers $x$, $y$ and $z$ such that $f_Z(z) > 0$.
If discrete $X$ and $Y$ are conditionally independent given $Z$, then

$P(X = x \mid Y = y, Z = z) = P(X = x \mid Z = z)$

for any $x$, $y$ and $z$ with $P(Z = z) > 0$. That is, the conditional distribution for $X$ given $Y$ and $Z$ is the same as that given $Z$ alone. A similar equation holds for the conditional probability density functions in the continuous case.
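The noisy-measurement intuition above can be checked exactly on a small discrete model; in this sketch (illustrative Python; the joint pmf `p`, the helper `marginal`, and the 1/4 flip probability are ours), $X$ and $Y$ are independent noisy readings of a fair bit $Z$:

```python
from fractions import Fraction
from itertools import product

flip = Fraction(1, 4)                     # each reading is flipped w.p. 1/4

def p(x, y, z):
    """Joint pmf P(X=x, Y=y, Z=z) for a fair bit Z and two noisy readings."""
    pz = Fraction(1, 2)
    px = flip if x != z else 1 - flip     # P(X=x | Z=z)
    py = flip if y != z else 1 - flip     # P(Y=y | Z=z)
    return pz * px * py

def marginal(fixed):
    """Sum the pmf over the coordinates left as None in `fixed`."""
    return sum(p(x, y, z)
               for x, y, z in product((0, 1), repeat=3)
               if all(v == w for v, w in zip((x, y, z), fixed) if w is not None))

# X and Y are conditionally independent given Z:
print(all(
    marginal((x, y, z)) / marginal((None, None, z)) ==
    (marginal((x, None, z)) / marginal((None, None, z))) *
    (marginal((None, y, z)) / marginal((None, None, z)))
    for x, y, z in product((0, 1), repeat=3)))        # True

# ...but X and Y are not (unconditionally) independent, since both track Z:
print(all(
    marginal((x, y, None)) == marginal((x, None, None)) * marginal((None, y, None))
    for x, y in product((0, 1), repeat=2)))           # False
```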
Independence can be seen as a special kind of conditional independence, since probability can be seen as a kind of conditional probability given no events.