# Probability/Probability Spaces

## Concept

We will now develop a more axiomatic theory of probability, which allows for a simpler mathematical formalism. We shall do so by developing the concept of a probability space, which will let us harness many theorems from mathematical analysis.

Recall that an experiment is any action or process with an outcome that is subject to uncertainty or randomness. A probability space or a probability triple is a mathematical construct that models an experiment and its set of possible outcomes.

## Probability space

Before defining probability space, we define several terms used in its definition.

Definition. (Sample space) The sample space, denoted by ${\displaystyle \Omega }$ , is the non-empty set whose elements are all possible outcomes of an experiment.

Remark.

• The sample space is often not unique, since there are often multiple ways to define the possible outcomes of an experiment, e.g. because the outcomes can be expressed differently [1].
• An outcome from the experiment is commonly denoted by ${\displaystyle \omega }$  (small letter of ${\displaystyle \Omega }$ , omega).

Example. A sample space for the number coming up from rolling a six-faced die is ${\displaystyle \Omega =\{1,2,3,4,5,6\}}$ .

Definition. (Event) An event is a subset of the sample space.

Remark.

• It follows that the event space ${\displaystyle {\mathcal {F}}}$ , which is the set consisting of all events (the family of events), is the power set of the sample space, i.e. ${\displaystyle {\mathcal {F}}={\mathcal {P}}(\Omega )}$ .
• An event consisting of a single outcome (i.e. a singleton) is sometimes referred to as a simple event, and an event consisting of more than one outcome is sometimes referred to as a compound event.
• An event is said to have happened or occurred if the outcome of the experiment is an element of the event.

Example. The sets ${\displaystyle \varnothing ,\{1,2,3\}}$  and ${\displaystyle \Omega }$  are events from rolling a six-faced die, while the set ${\displaystyle \{0\}}$  is not, since it is not a subset of ${\displaystyle \Omega }$ .
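Since the event space of a finite sample space is its power set, we can enumerate it directly. A minimal sketch in Python (the helper `power_set` is our own illustrative name, not a standard library function):

```python
from itertools import chain, combinations

# Sample space for rolling a six-faced die.
omega = {1, 2, 3, 4, 5, 6}

def power_set(s):
    """All subsets of s, i.e. the event space F = P(omega)."""
    items = sorted(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

events = power_set(omega)
print(len(events))          # 64 events, since #P(omega) = 2^6
print({1, 2, 3} in events)  # True: {1, 2, 3} is an event
print({0} in events)        # False: {0} is not a subset of omega
```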

Definition. (Probability space) A probability space is a mathematical triplet ${\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )}$  consisting of the sample space ${\displaystyle \Omega }$ , event space ${\displaystyle {\mathcal {F}}}$ , and a probability function ${\displaystyle \mathbb {P} }$ .

Remark.

• There are multiple ways to define the probability function, as we will see in the following sections; among those definitions, the axiomatic definition is the most widely used, and the most general.
• The probability function is sometimes denoted by ${\displaystyle \Pr }$ , ${\displaystyle P}$  or ${\displaystyle p}$  instead.
• The notation ${\displaystyle \mathbb {P} }$  is mainly used in this book to distinguish the probability function from other functions named ${\displaystyle P}$  or ${\displaystyle p}$ .
• A probability space is arbitrary, in the sense that its author ultimately defines which elements ${\displaystyle \Omega }$ , ${\displaystyle {\mathcal {F}}}$ , and ${\displaystyle \mathbb {P} }$  will contain.
• The probability function ${\displaystyle \mathbb {P} }$  may serve as a model for a particular class of real-world situations.

## Terminology

Terminology for sets from set theory also applies to events, since an event is essentially a set. Apart from that terminology, we also have the following extra terminology for events.

Definition. (Exhaustive) Events ${\displaystyle E_{1},\dotsc ,E_{n}}$  are exhaustive if ${\displaystyle E_{1}\cup \dotsb \cup E_{n}=\Omega }$ .

Example. When we roll a six-faced die, considering the number coming up as the outcome, the events ${\displaystyle \{1,2,3,4\}}$  and ${\displaystyle \{3,4,5,6\}}$  are exhaustive, while the events ${\displaystyle \varnothing }$  and ${\displaystyle \{1,2,3,4,5\}}$  are not exhaustive.

Definition. (Partition) A group of events ${\displaystyle E_{1},\dotsc ,E_{n}}$  is a partition of ${\displaystyle \Omega }$  if the events are both disjoint and exhaustive.

Example. When we roll a six-faced die, considering the number coming up as the outcome, the group of events ${\displaystyle \varnothing }$  and ${\displaystyle \Omega }$  is a partition, while the group of events ${\displaystyle \{1,2,{\color {maroon}3,4}\}}$  and ${\displaystyle \{{\color {maroon}3,4},5,6\}}$  is not a partition, since these events are not disjoint.
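Both conditions are easy to check mechanically for finite events. A small sketch (the function names are illustrative):

```python
# Check exhaustiveness and pairwise disjointness of finite events.
omega = {1, 2, 3, 4, 5, 6}

def exhaustive(events, omega):
    """Do the events cover the whole sample space?"""
    return set().union(*events) == omega

def pairwise_disjoint(events):
    """Does no outcome lie in two of the events?"""
    return sum(len(e) for e in events) == len(set().union(*events))

def is_partition(events, omega):
    return exhaustive(events, omega) and pairwise_disjoint(events)

print(is_partition([{1, 2}, {3, 4}, {5, 6}], omega))          # True
print(is_partition([{1, 2, 3, 4}, {3, 4, 5, 6}], omega))      # False: not disjoint
```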

## Probability definition

The remaining undefined item in the probability space is the probability function ${\displaystyle \mathbb {P} }$ , and we will give several definitions of it, among which the combinatorial (or classical) and axiomatic definitions are the most important.

Definition. (Subjective probability) The probability of an event is a measure of the chance with which we can expect the event to occur. We assign a number between 0 and 1 inclusively to the probability of an event. A probability of 1 means that we are certain the event will occur, and a probability of 0 means that we are certain the event will not occur.

Example. Amy and Bob assess their probabilities of winning the top prize in a lucky draw using the subjective probability approach.

• Amy thinks that she is lucky, and thus assigns 0.7 to the probability of winning the top prize.
• Bob thinks that he is unlucky, and thus assigns 0.1 to the probability of winning the top prize.

Remark.

• This illustrates a major problem with subjective probability, namely that the probability assigned to an event is often not unique, since different people may hold different opinions.

Definition. (Combinatorial probability) Assume all outcomes in the sample space ${\displaystyle \Omega }$  are equally likely. Then, the (combinatorial) probability of an event (say ${\displaystyle E}$ ) in the sample space is ${\displaystyle \mathbb {P} (E)=\#(E)/\#(\Omega )}$ .

Remark.

• It is also called classical probability.
• If the outcomes are not equally likely, we cannot apply this definition.
• By principle of indifference (or insufficient reason), unless there exists evidence showing that the outcomes are not equally likely [2], we should assume that the outcomes are equally likely.
• When the sample space contains infinitely many outcomes, the combinatorial probability is undefined.

Example. The probability that the number 1 comes up on both dice when rolling a fair red six-faced die and a fair blue six-faced die is ${\displaystyle 1/(6\cdot 6)=1/36}$ .

Proof. The number of pairs of numbers coming up on the two dice is ${\displaystyle \underbrace {6} _{\text{red}}\times \underbrace {6} _{\text{blue}}=36}$ . Since both dice are fair, the 36 outcomes are equally likely, and so we can apply combinatorial probability here. The event contains exactly one of these outcomes, namely the pair (1,1), so its probability is 1/36.

${\displaystyle \Box }$
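For a finite sample space of equally likely outcomes, the combinatorial definition can be applied by brute-force enumeration. A quick check of the two-dice example, using exact fractions:

```python
from fractions import Fraction
from itertools import product

# All (red, blue) pairs of numbers coming up: 36 equally likely outcomes.
omega = list(product(range(1, 7), repeat=2))
event = [w for w in omega if w == (1, 1)]  # the number 1 comes up on both dice

p = Fraction(len(event), len(omega))
print(p)  # 1/36
```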

Exercise.

Suppose the blue die is colored red. Calculate the probability again.

 1/36 1/21 1/18 1/15 1/6

Example. (Capture-mark-recapture) We are fishing in a lake containing ${\displaystyle N}$  fish. First, we catch ${\displaystyle k\leq N}$  fish from the lake (capture), and give each of them a marker (mark). Then, we catch fish from the lake again (recapture), catching ${\displaystyle n\leq N}$  fish this time. The probability that there are ${\displaystyle r}$  marked fish among the ${\displaystyle n}$  fish (where ${\displaystyle r\leq k}$  and ${\displaystyle r\leq n}$ ) is ${\displaystyle {\binom {k}{r}}\times {\binom {N-k}{n-r}}{\bigg /}{\binom {N}{n}}}$ .

Proof. We order the ${\displaystyle N}$  fish in the lake notionally (e.g. by assigning them distinct numbers one by one), so that they are (notionally) distinguishable. Then, we have:

• ${\displaystyle {\binom {N}{n}}}$ : the number of outcomes of catching ${\displaystyle n}$  fish from the ${\displaystyle N}$  fish;
• ${\displaystyle {\binom {k}{r}}}$ : the number of outcomes of catching ${\displaystyle r}$  marked fish from the ${\displaystyle k}$  marked fish in the recapture process;
• ${\displaystyle {\binom {N-k}{n-r}}}$ : the number of outcomes of catching ${\displaystyle n-r}$  unmarked fish from the ${\displaystyle N-k}$  unmarked fish in the recapture process (this ensures that we catch exactly ${\displaystyle r}$  marked fish, by ensuring that the remaining caught fish contain no marked fish).

Since all ${\displaystyle {\binom {N}{n}}}$  outcomes of the recapture are equally likely, the desired probability follows from the combinatorial definition.

${\displaystyle \Box }$
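This is the hypergeometric probability, and it can be computed directly with `math.comb`. A sketch with illustrative numbers (N = 50, k = 10, n = 5 are assumptions for the demonstration, not values from the text):

```python
from fractions import Fraction
from math import comb

def recapture_prob(N, k, n, r):
    """P(exactly r of the n recaptured fish are marked)."""
    return Fraction(comb(k, r) * comb(N - k, n - r), comb(N, n))

# Illustrative numbers: 50 fish in the lake, 10 marked, 5 recaptured.
for r in range(6):
    print(r, recapture_prob(50, 10, 5, r))

# The probabilities over all possible r sum to 1, as they should.
print(sum(recapture_prob(50, 10, 5, r) for r in range(6)))  # 1
```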

Exercise. There are 9 balls in a box, consisting of 3 red balls, 2 blue balls and 4 green balls.

1 Calculate the probability that a red ball is drawn from the box if 1 ball is drawn from the box.

 1/28 3/28 1/9 1/3 None of the above.

2 Calculate the probability that 2 red balls and 3 green balls are drawn from the box if 6 balls are drawn from the box.

 2/7 5/9 5/7 5/6 None of the above.

3 ${\displaystyle n}$  orange balls are added to the box such that the probability that 2 red balls and 3 green balls are drawn from the box if 6 balls are drawn from the box is now ${\displaystyle 1/3}$ . Calculate ${\displaystyle n}$ .

 2 4 8 16 None of the above.

4 Select the correct (in numerical value sense) expression(s) of the probability that ${\displaystyle r}$  red balls are drawn and ${\displaystyle b}$  blue balls are drawn if ${\displaystyle k}$  balls are drawn from the box (${\displaystyle r}$ , ${\displaystyle b}$  and ${\displaystyle k}$  are of values such that all terms in the following are defined).

 ${\displaystyle {\binom {3}{r}}{\binom {2}{b}}{\bigg /}{\binom {9}{k}}}$ ${\displaystyle {\binom {3}{r}}{\binom {2}{b}}{\bigg /}{\binom {b+r+k}{k}}}$ ${\displaystyle {\binom {3}{r}}{\binom {2}{b}}{\bigg /}{\binom {9}{9-b-r}}}$ ${\displaystyle {\binom {9-b-k}{r}}{\binom {2}{b}}{\bigg /}{\binom {9}{k}}}$ ${\displaystyle {\binom {3}{r}}{\binom {9-r-k}{b}}{\bigg /}{\binom {9}{k}}}$

Definition. (Frequentist probability) The probability of an event or outcome is the long-term proportion of times the event would occur if the experiment were repeated independently many times. That is, letting ${\displaystyle n(E)}$  be the no. of times that event ${\displaystyle E}$  occurs in ${\displaystyle n}$  repetitions of the experiment, the probability of ${\displaystyle E}$  is ${\displaystyle \mathbb {P} (E)=\lim _{n\to \infty }{\frac {n(E)}{n}}.}$

Remark.

• When the no. of repetitions is large enough, the ratio of the no. of times that event ${\displaystyle E}$  occurs from these repetitions to the no. of repetitions can be used to approximate ${\displaystyle \mathbb {P} (E)}$ .

Example. Suppose we toss a coin 1 million times (i.e. 1000000 times). Heads comes up 700102 times, tails comes up 299896 times, and the coin lands on its edge 2 times.

Then, the probability that heads comes up is close to ${\displaystyle 700102/1000000=0.700102}$ .

Given this, we may suspect that the coin is unfair [3].
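We can watch this convergence in a simulation. A sketch that models a biased coin with head probability 0.7 (the sample size and seed are illustrative choices) and computes the observed proportion of heads:

```python
import random

random.seed(0)  # for reproducibility

p_head, n = 0.7, 100_000  # model: biased coin, tossed many times
heads = sum(random.random() < p_head for _ in range(n))
print(heads / n)  # close to 0.7 for large n, per the frequentist definition
```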

Definition. (Axiomatic probability) A probability is a set function defined on the event space ${\displaystyle {\mathcal {F}}}$ . It assigns a real value ${\displaystyle \mathbb {P} (E)}$  to each event ${\displaystyle E}$ , with the following probability axioms satisfied:

(P1) for each event ${\displaystyle E\in {\mathcal {F}}}$ , ${\displaystyle \mathbb {P} (E)\geq 0}$  (nonnegativity);
(P2) ${\displaystyle \mathbb {P} (\Omega )=1}$  (unitarity);
(P3) for each (countable) infinite sequence of mutually exclusive (or disjoint) events ${\displaystyle E_{1},E_{2},\dotsc }$ , ${\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{\infty }E_{i}\right)=\sum _{i=1}^{\infty }\mathbb {P} (E_{i})}$  (countable additivity).

Example. Based on the probability axioms, the probability of an event cannot be -0.1, since a negative value would violate nonnegativity (P1).

Example. (Combinatorial probability is probability) Combinatorial probability is a probability since it satisfies all three probability axioms.

Proof.

(P1) It follows from observing that the no. of outcomes in an event is nonnegative, and hence so is the ratio ${\displaystyle \#(E)/\#(\Omega )}$ ;
(P2) It follows from ${\displaystyle \mathbb {P} (\Omega )=\#(\Omega )/\#(\Omega )=1}$ ;
(P3) It follows from observing that the no. of outcomes in a union of (infinitely many) disjoint sets is the same as the sum of the no. of outcomes in each of the disjoint sets (this can be seen, non-rigorously, through a Venn diagram).

${\displaystyle \Box }$
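The axioms can also be checked mechanically for the combinatorial probability on a small finite sample space (P3 is checked here in its finite form, since only finitely many disjoint non-empty events fit in a finite space):

```python
from fractions import Fraction
from itertools import chain, combinations

omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Combinatorial probability #(E)/#(omega)."""
    return Fraction(len(event), len(omega))

# All events, i.e. the power set of omega.
events = [set(c) for c in chain.from_iterable(
    combinations(sorted(omega), r) for r in range(len(omega) + 1))]

assert all(prob(e) >= 0 for e in events)  # P1: nonnegativity
assert prob(omega) == 1                   # P2: unitarity
parts = [{1, 2}, {3, 4}, {5, 6}]          # disjoint events
assert prob(set().union(*parts)) == sum(prob(p) for p in parts)  # finite additivity
print("P1, P2 and finite additivity verified")
```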

With these three axioms only, we can prove many well-known properties of probability.

## Properties of probability

### Basic properties of probability

Proposition. (Probability of empty set) ${\displaystyle \mathbb {P} (\varnothing )=0}$ .

Proof. Let ${\displaystyle E_{i}=\varnothing }$  for each positive integer ${\displaystyle i}$ . The events ${\displaystyle E_{1},E_{2},\dotsc }$  are mutually exclusive, since they are all the empty set, and so the intersection of any two of them is also the empty set. Also, ${\displaystyle E_{1}\cup E_{2}\cup \dotsb =\varnothing \cup \varnothing \cup \dotsb =\varnothing }$ . So,

{\displaystyle {\begin{aligned}&&\mathbb {P} (\varnothing )&=\mathbb {P} (E_{1}\cup E_{2}\cup \dotsb )\\&&&{\overset {\text{ P3 }}{=}}\mathbb {P} (E_{1})+\mathbb {P} (E_{2})+\dotsb \\&\Rightarrow &\underbrace {\mathbb {P} (\varnothing )-\mathbb {P} (E_{1})} _{0}&=\mathbb {P} (E_{2})+\dotsb \\&\Rightarrow &\mathbb {P} (E_{2})+\dotsb &=0\\&\Rightarrow &\mathbb {P} (E_{2})&\leq \mathbb {P} (E_{2})+\dotsb =0.\end{aligned}}}

By P1, ${\displaystyle \mathbb {P} (E_{2})\geq 0}$ . It follows from these two inequalities that ${\displaystyle \mathbb {P} (\varnothing )=\mathbb {P} (E_{2})=0}$ .

${\displaystyle \Box }$

Proposition. (Extended P3) The property of probability in the third axiom of probability (P3) is also valid for a finite sequence of events.

Proof. For each positive integer ${\displaystyle k}$ , suppose that ${\displaystyle A_{1},\dotsc ,A_{k}}$  are disjoint events, and append to these the infinite sequence of events ${\displaystyle A_{k+1}=\varnothing ,A_{k+2}=\varnothing ,\dotsc }$ . By P3,

${\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{k}A_{i}\right)=\mathbb {P} \left(\bigcup _{i=1}^{\infty }A_{i}\right)=\sum _{i=1}^{\infty }\mathbb {P} (A_{i})=\sum _{i=1}^{k}\mathbb {P} (A_{i})}$

since ${\displaystyle \sum _{i=k+1}^{\infty }\mathbb {P} (A_{i})=\mathbb {P} (\varnothing )+\dotsb =0}$ .

${\displaystyle \Box }$

Proposition. (Simplified law of total probability) For each pair of events ${\displaystyle A}$  and ${\displaystyle B}$ , ${\displaystyle \mathbb {P} (B)=\mathbb {P} (B\cap A)+\mathbb {P} (B\setminus A)}$ .

Proof.

${\displaystyle \mathbb {P} (B)=\mathbb {P} (B\cap (\underbrace {A\cup A^{c}} _{\Omega }))=\mathbb {P} {\big (}(B\cap A)\cup (\underbrace {B\cap A^{c}} _{:=B\setminus A}){\big )}{\overset {\text{ ext. P3 }}{=}}\mathbb {P} (B\cap A)+\mathbb {P} (B\setminus A)}$

[4]

${\displaystyle \Box }$

Illustration of simplified law of total probability:

|---------|
|  B\A    | <----- B
|    |----|-----|
|    |BnA |     |
|----|----|     | <---- A
|----------|
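This decomposition is easy to confirm with concrete finite events under the combinatorial probability (the particular sets A and B below are illustrative):

```python
from fractions import Fraction

omega = set(range(1, 7))
A, B = {1, 2, 3}, {2, 3, 4, 5}

def prob(e):
    """Combinatorial probability on omega."""
    return Fraction(len(e), len(omega))

# P(B) = P(B ∩ A) + P(B \ A)
print(prob(B), "=", prob(B & A), "+", prob(B - A))  # 2/3 = 1/3 + 1/3
```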


Proposition. (Simplified inclusion-exclusion principle) For each event ${\displaystyle A}$  and ${\displaystyle B}$ , ${\displaystyle \mathbb {P} (A\cup B)=\mathbb {P} (A)+\mathbb {P} (B)-\mathbb {P} (A\cap B)}$ .

Proof. Since events ${\displaystyle A}$  and ${\displaystyle B\setminus A}$  are disjoint, by extended P3,

${\displaystyle \mathbb {P} (A\cup (B\setminus A))=\mathbb {P} (A)+\mathbb {P} (B\setminus A)=\mathbb {P} (A)+(\mathbb {P} (B)-\mathbb {P} (B\cap A))=\mathbb {P} (A)+\mathbb {P} (B)-\mathbb {P} (A\cap B)}$

since ${\displaystyle \mathbb {P} (B)=\mathbb {P} (B\cap A)+\mathbb {P} (B\setminus A)\Rightarrow \mathbb {P} (B\setminus A)=\mathbb {P} (B)-\mathbb {P} (B\cap A)}$ .

${\displaystyle \Box }$

Illustration of simplified inclusion-exclusion principle:

|---------|
|         | <----- B
| II |----|-----|
|    |AnB |     |
|----|----| I   | <---- A
|----------|


${\displaystyle \mathbb {P} (A\cup B)=\mathbb {P} ({\text{I}})+\mathbb {P} ({\text{II}})+\mathbb {P} (A\cap B)=\underbrace {\mathbb {P} ({\text{I}})+\mathbb {P} (A\cap B)} _{\mathbb {P} (A)}+\underbrace {\mathbb {P} ({\text{II}})+\mathbb {P} (A\cap B)} _{\mathbb {P} (B)}-\mathbb {P} (A\cap B)}$

Proposition. (Complement rule) For each event ${\displaystyle E}$ , ${\displaystyle \mathbb {P} (E)=1-\mathbb {P} (E^{c})}$ .

Proof.

${\displaystyle \mathbb {P} (E)=\mathbb {P} (E)+\mathbb {P} (E^{c})-\mathbb {P} (E^{c}){\overset {\text{ ext. P3 }}{=}}\mathbb {P} (\underbrace {E\cup E^{c}} _{\Omega })-\mathbb {P} (E^{c}){\overset {\text{ P2 }}{=}}1-\mathbb {P} (E^{c})}$

${\displaystyle \Box }$

Illustration of complement rule:

|---------------|
|               |
|      E^c      | <--- Omega (Pr(Omega)=1)
|    |---|      |
|    | E |      |
|    |---|      |
|---------------|


Proposition. (Numeric bound for probability) For each event ${\displaystyle E}$ , ${\displaystyle 0\leq \mathbb {P} (E)\leq 1}$ .

Proof. By P1, ${\displaystyle \mathbb {P} (E)\geq 0}$  and ${\displaystyle \mathbb {P} (E^{c})\geq 0}$ . So, by the complement rule, ${\displaystyle \mathbb {P} (E)\leq \mathbb {P} (E)+\mathbb {P} (E^{c})=\mathbb {P} (E)+(1-\mathbb {P} (E))=1.}$

${\displaystyle \Box }$

Proposition. (Monotonicity) If ${\displaystyle A\subseteq B}$ , then ${\displaystyle \mathbb {P} (A)\leq \mathbb {P} (B)}$ .

Proof. By simplified law of total probability,

${\displaystyle \mathbb {P} (B)=\mathbb {P} (\underbrace {B\cap A} _{A})+\mathbb {P} (B\setminus A){\overset {\text{ P1 }}{\geq }}\mathbb {P} (A){\cancel {+0}}.}$

${\displaystyle \Box }$

Example. The probability of winning the championship in a competition is less than or equal to that of entering the final of the competition, by monotonicity.

Proof. Let ${\displaystyle C}$  and ${\displaystyle F}$  be the events of winning the championship in the competition and entering the final of the competition respectively. Then, ${\displaystyle C\subseteq F}$ , since ${\displaystyle C}$  implies ${\displaystyle F}$  (when we win the championship, we must have entered the final), and so ${\displaystyle \mathbb {P} (C)\leq \mathbb {P} (F)}$ .

${\displaystyle \Box }$

Exercise.

Select all correct statement(s). All following capital letters are events.

 If ${\displaystyle A=B}$ , then ${\displaystyle \mathbb {P} (A)=\mathbb {P} (B)}$  . ${\displaystyle \mathbb {P} {\big (}A\setminus (B\cup C){\big )}=\mathbb {P} (A)+\mathbb {P} (A\cap B)+\mathbb {P} (A\cap C)-\mathbb {P} (A\cap B\cap C)}$ . ${\displaystyle {\frac {\mathbb {P} (A\cap B)}{\mathbb {P} (B)}}=1}$  if ${\displaystyle A\subseteq B}$  and ${\displaystyle \mathbb {P} (B)>0}$ . ${\displaystyle 0\leq {\frac {\mathbb {P} (A\cap B)}{\mathbb {P} (B)}}\leq 1}$  if ${\displaystyle \mathbb {P} (B)>0}$  .

### More advanced properties of probability

Theorem. (Inclusion-exclusion principle (probability))

Illustration of inclusion-exclusion principle when ${\displaystyle n=3}$

For each event ${\displaystyle E_{1},\dotsc ,E_{n}}$ ,

{\displaystyle {\begin{aligned}\mathbb {P} (E_{1}\cup \dotsb \cup E_{n})&=\mathbb {P} (E_{1})+\dotsb +\mathbb {P} (E_{n})\\&\;-{\big (}\mathbb {P} (E_{1}\cap E_{2})+\mathbb {P} (E_{1}\cap E_{3})+\dotsb +\mathbb {P} (E_{n-1}\cap E_{n}){\big )}\\&\;+{\big (}\mathbb {P} (E_{1}\cap E_{2}\cap E_{3})+\mathbb {P} (E_{1}\cap E_{2}\cap E_{4})+\dotsb +\mathbb {P} (E_{n-2}\cap E_{n-1}\cap E_{n}){\big )}\\&\;-\dotsb \\&\;+(-1)^{n+1}\mathbb {P} (E_{1}\cap \dotsb \cap E_{n}).\end{aligned}}}

Proof. We can prove this by induction.

Recall the simplified inclusion-exclusion principle, which is essentially the inclusion-exclusion principle when ${\displaystyle n=2}$ . So, we know that the inclusion-exclusion principle is true for ${\displaystyle n=2}$ , and it remains to prove the case with larger ${\displaystyle n}$ .

The idea of the induction is illustrated as follows: by simplified inclusion-exclusion principle,

{\displaystyle {\begin{aligned}\mathbb {P} ((E_{1}\cup \dotsb \cup E_{n-1})\cup {\color {darkgreen}E_{n}})&=\mathbb {P} (E_{1}\cup \dotsb \cup E_{n-1})+\mathbb {P} ({\color {darkgreen}E_{n}})-\mathbb {P} {\big (}(E_{1}\cup \dotsb \cup E_{n-1})\cap {\color {darkgreen}E_{n}}{\big )}\\&=\mathbb {P} (E_{1}\cup \dotsb \cup E_{n-1})+\mathbb {P} ({\color {darkgreen}E_{n}})-\mathbb {P} {\big (}(E_{1}\cap {\color {darkgreen}E_{n}})\cup \dotsb \cup (E_{n-1}\cap {\color {darkgreen}E_{n}}){\big )}\\&=\dotsb \end{aligned}}}

${\displaystyle \Box }$

Remark.

• We can write the inclusion-exclusion principle more compactly as follows:

${\displaystyle \mathbb {P} (E_{1}\cup \dotsb \cup E_{n})=\sum _{j=1}^{n}(-1)^{j+1}\sum _{i_{1}<\dotsb <i_{j}}\mathbb {P} (E_{i_{1}}\cap \dotsb \cap E_{i_{j}}).}$

• An alternative and more elegant proof is provided in the chapter about properties of distributions.
• For the intersections of events, every distinct combination of the events is involved.

Example. When ${\displaystyle n=3}$ , for each event ${\displaystyle A}$ , ${\displaystyle B}$  and ${\displaystyle C}$ ,

${\displaystyle \mathbb {P} (A\cup B\cup C)=\mathbb {P} (A)+\mathbb {P} (B)+\mathbb {P} (C)-\mathbb {P} (A\cap B)-\mathbb {P} (A\cap C)-\mathbb {P} (B\cap C)+\mathbb {P} (A\cap B\cap C).}$

Example. We select a student from some university students. It is given that

• the selected student has a major in mathematics with a probability 0.4;
• the selected student has a major in statistics with a probability 0.55;
• the selected student has a major in accounting with a probability 0.3;
• the selected student has a major in statistics and accounting with a probability 0.2;
• the selected student has a major in accounting and mathematics with a probability 0.15;
• the selected student has a major in mathematics and statistics with a probability 0.2;
• the selected student has a major in mathematics, statistics and accounting with a probability 0.1.

Then, the probability that the selected student does not have any of these majors is ${\displaystyle 1-(0.4+0.55+0.3-0.2-0.15-0.2+0.1)=0.2}$ .

Proof. Let ${\displaystyle M}$ , ${\displaystyle S}$ , ${\displaystyle A}$  be the event that the selected student among them has a major in mathematics, statistics and accounting respectively. Then,

{\displaystyle {\begin{aligned}\mathbb {P} (M^{c}\cap S^{c}\cap A^{c})&=\mathbb {P} {\big (}(M\cup S\cup A)^{c}{\big )}=1-\mathbb {P} (M\cup S\cup A)\\&=1-(\mathbb {P} (M)+\mathbb {P} (S)+\mathbb {P} (A)-\mathbb {P} (M\cap S)-\mathbb {P} (M\cap A)-\mathbb {P} (S\cap A)+\mathbb {P} (M\cap S\cap A))\\&=1-(0.4+0.55+0.3-0.2-0.15-0.2+0.1)\\&=1-0.8=0.2.\end{aligned}}}

Alternatively, we can consider the following Venn diagram:
|-------------| <--------- A
|             |
|        |----|----|
|        |    |    |
| 0.05   |0.05|0.15| <---- M
|        |    |    |
|--------|----|----|------|
|        |0.1 |0.1 |      |
| 0.1    |    |    | 0.25 | <---- S
|        |----|----|      |
|-------------|-----------|


We can see from this diagram that ${\displaystyle \mathbb {P} (M\cup S\cup A)=0.05+0.05+0.15+0.1+0.1+0.1+0.25=0.8}$ , and thus the desired probability is ${\displaystyle 1-0.8=0.2}$ .

${\displaystyle \Box }$
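The arithmetic in this example can be sketched directly from the n = 3 inclusion-exclusion formula:

```python
# Given probabilities for majors M (mathematics), S (statistics), A (accounting).
pM, pS, pA = 0.4, 0.55, 0.3
pMS, pMA, pSA = 0.2, 0.15, 0.2
pMSA = 0.1

# Inclusion-exclusion principle with n = 3.
p_union = pM + pS + pA - pMS - pMA - pSA + pMSA
print(round(p_union, 10))      # 0.8: probability of having at least one of the majors
print(round(1 - p_union, 10))  # 0.2: probability of having none of them (complement rule)
```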

Exercise.

1 Calculate the probability that the selected student has at least two of those three majors.

 0.1 0.15 0.2 0.25 0.4

2 Calculate the probability that the selected student has one and only one major.

 0.3 0.35 0.4 0.45 0.5

Lemma. For each event ${\displaystyle E_{1},E_{2},\dotsc }$ ,

${\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{\infty }E_{i}\right){\overset {\text{ def }}{=}}\mathbb {P} \left(\lim _{n\to \infty }\bigcup _{i=1}^{n}E_{i}\right)=\lim _{n\to \infty }\mathbb {P} \left(\bigcup _{i=1}^{n}E_{i}\right).}$

Proof. Without loss of generality, assume the events are disjoint; otherwise, replace each ${\displaystyle E_{i}}$  with ${\displaystyle F_{i}=E_{i}\setminus (E_{1}\cup \dotsb \cup E_{i-1})}$ , which are disjoint and have the same partial and infinite unions. Then,

${\displaystyle \lim _{n\to \infty }\mathbb {P} \left(\bigcup _{i=1}^{n}E_{i}\right){\overset {\text{ext. P3}}{=}}\lim _{n\to \infty }\sum _{i=1}^{n}\mathbb {P} (E_{i}){\overset {\text{ def }}{=}}\sum _{i=1}^{\infty }\mathbb {P} (E_{i}){\overset {\text{ P3 }}{=}}\mathbb {P} \left(\bigcup _{i=1}^{\infty }E_{i}\right).}$

${\displaystyle \Box }$

Proposition. (Boole's inequality) For each event ${\displaystyle E_{1},E_{2},\dotsc }$ ,

${\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{\infty }E_{i}\right)\leq \sum _{i=1}^{\infty }\mathbb {P} (E_{i}).}$

Proof. First, by the simplified inclusion-exclusion principle, for each pair of events ${\displaystyle A}$  and ${\displaystyle B}$ , ${\displaystyle \mathbb {P} (A\cup B)=\mathbb {P} (A)+\mathbb {P} (B)-\mathbb {P} (A\cap B){\overset {\text{ P1 }}{\leq }}\mathbb {P} (A)+\mathbb {P} (B)}$ .

So,

${\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{n}E_{i}\right)\leq \mathbb {P} (E_{1})+\mathbb {P} \left(\bigcup _{i=2}^{n}E_{i}\right)\leq \mathbb {P} (E_{1})+\mathbb {P} (E_{2})+\mathbb {P} \left(\bigcup _{i=3}^{n}E_{i}\right)\leq \dotsb \leq \mathbb {P} (E_{1})+\mathbb {P} (E_{2})+\dotsb +\mathbb {P} (E_{n})=\sum _{i=1}^{n}\mathbb {P} (E_{i}).}$

Using the lemma,

${\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{\infty }E_{i}\right)=\lim _{n\to \infty }\mathbb {P} \left(\bigcup _{i=1}^{n}E_{i}\right){\overset {\text{from above}}{\leq }}\lim _{n\to \infty }\sum _{i=1}^{n}\mathbb {P} (E_{i}){\overset {\text{ def }}{=}}\sum _{i=1}^{\infty }\mathbb {P} (E_{i}).}$

${\displaystyle \Box }$
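For finitely many finite events, Boole's inequality can be checked directly; the inequality is strict exactly when the events overlap (the events below are illustrative):

```python
from fractions import Fraction

omega = set(range(1, 7))
events = [{1, 2}, {2, 3}, {3, 4, 5}]  # overlapping events

def prob(e):
    """Combinatorial probability on omega."""
    return Fraction(len(e), len(omega))

lhs = prob(set().union(*events))    # P(E1 ∪ E2 ∪ E3)
rhs = sum(prob(e) for e in events)  # P(E1) + P(E2) + P(E3)
print(lhs, "<=", rhs)  # 5/6 <= 7/6
```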

1. e.g. the sample space of rolling a die may include the six numbers, or may only include two outcomes: odd number and even number
2. e.g. it is given that a coin is biased, such that it is more likely that head comes up
3. However, it is still possible that the coin is fair.
4. ext. stands for 'extended'