Probability/Print version


Probability

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Probability

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.


Local Manual of Style

Purpose of this book

The difficulty level of this book should be similar to that of a first university-level probability course. In particular, measure theory and related advanced topics should not be included in this book. Instead, they should be included in the Probability Theory wikibook (for measure-theoretic probability) or the Measure Theory wikibook (for measure theory itself).

Applications of probability can be included briefly, but are not the main focus of this wikibook.

Some notations and abbreviations

Notations

On some occasions, these notations may have meanings different from those stated in the following. The words explaining the meaning of a notation in the actual content take precedence.

  • ≜ : equals by definition
  • CAPITAL letters (possibly with subscripts): sets [1] or random variables [2];
  • small letters (possibly with subscripts): variables or elements in sets;
  • A ∪ B: the union of A and B;
  • A ∩ B: the intersection of A and B;
  • B \ A: the relative complement of A in B;
  • A ⊆ B: A is a subset of B;
  • A ⊂ B: A is a proper subset of B;
  • Aᶜ: the (absolute) complement of A;
  • U: a universal set;
  • |A|: the cardinality of A;
  • 𝒫(A): the power set of A;
  • C(n, r): the binomial coefficient indexed by n and r;
  • Ω: a sample space;
  • 𝓕: an event space;
  • P: the probability (function);
  • A ⊥ B: A and B are independent;
  • F: a cumulative distribution function;
  • f: a probability mass or probability density function;
  • supp(f): the support of f;
  • Binom(n, p): the binomial distribution with n independent Bernoulli trials, each with success probability p;
  • Ber(p): the Bernoulli distribution with one Bernoulli trial with success probability p;
  • Pois(λ): the Poisson distribution with rate parameter λ;
  • Geo(p): the geometric distribution with success probability p;
  • NB(r, p): the negative binomial distribution (number of failures before the r-th success) with success probability p;
  • Hypergeom(N, K, n): the hypergeometric distribution with population size N containing K objects of type 1, N − K objects of another type, and n objects drawn;
  • Finite(x, p): the finite discrete distribution with value vector x and probability vector p;
  • DU(n): the discrete uniform distribution;
  • U(a, b): the uniform distribution over the interval [a, b];
  • Exp(λ) [3]: the exponential distribution with rate parameter λ;
  • Gamma(α, λ): the gamma distribution with shape parameter α and rate parameter λ;
  • Beta(α, β): the beta distribution with shape parameters α and β;
  • Cauchy(x₀): the Cauchy distribution with location parameter x₀ (and scale parameter 1);
  • N(μ, σ²): the normal distribution with mean μ and variance σ²;
  • χ²(ν): the chi-squared distribution with ν degrees of freedom;
  • t(ν): the Student's t-distribution with ν degrees of freedom;
  • F(ν₁, ν₂): the F-distribution with ν₁ and ν₂ degrees of freedom;
  • Multinom(n, p): the multinomial distribution with n trials and probability vector p.
  • N_k(μ, Σ): the k-dimensional multivariate normal distribution with mean vector μ and covariance matrix Σ;
  • E(X) (or μ): the mean of X;
  • Var(X) (or σ²): the variance of X;
  • σ(X): the standard deviation of X;
  • Cov(X, Y): the covariance of X and Y;
  • ρ(X, Y) (or Corr(X, Y)): the correlation coefficient of X and Y;
  • Bold CAPITAL letters (e.g. X, possibly with subscripts): random vectors;
  • Bold small letters (e.g. x, possibly with subscripts): vectors;
  • xᵀ: the transpose of x;
  • x · y: the dot product of x and y.

Abbreviations

  • no.: number;
  • r.v.: random variable;
  • cdf: cumulative distribution function;
  • pmf: probability mass function;
  • pdf: probability density function;
  • s.d.: standard deviation;
  • df: degrees of freedom;
  • It is usually denoted by ν ('nu', possibly with subscript).
  • i.i.d.: independent and identically distributed;
  • mgf: moment generating function;
  • CLT: Central Limit Theorem.

Conventions

  • Use title casing for subpage (called chapter) titles, and use sentence casing for section titles.
  • Use LaTeX (instead of HTML) for all math-related variables, formulas, notations etc., to ensure consistency in appearance[4].
  • Use <math></math> for inline math[5];
  • Use <math display=block></math> for display math (i.e. formulas on their own lines);
  • Use quizzes (if possible) for exercises.
  • Try to use mnemonic notations (if possible), e.g. S for a set, t for time, etc. [6]

Templates



Introduction

Overview

Probability theory provides a mathematical model for the study of randomness and uncertainty. Many important decisions, whether from business, government, science, recreation or even one's personal life must be made with incomplete information or some degree of uncertainty. Hence, a formalized study of uncertain or random outcomes occupies an important role in modern society. In situations where one of any number of possible outcomes may occur, the mathematical model of probability theory offers methods for quantifying the likelihoods associated with those outcomes. Probability also provides tools which allow us to move beyond simply describing the information contained within a set of data (descriptive statistics) to actually inferring further information from that data (inferential statistics). Many of the early attempts to model likelihood arose from games of chance. For a brief history of probability see this Wikipedia article.

Although probability theory is now a very formal branch of mathematics, the language of probability is often used informally in everyday speech. We express our beliefs about likelihoods of outcomes in situations involving uncertainty using intuition guided by our experiences and in some cases statistics. Consider the following examples:

  • Bill says "Don't buy the avocados here; about half the time, they're rotten". Bill is expressing his belief about the probability of an event — that an avocado will be rotten — based on his personal experience.
  • Lisa says "I am 95% certain the capital of Spain is Barcelona". Here, the belief Lisa is expressing is only a probability from her point of view, because she does not know that the capital of Spain is Madrid (from our point of view, the probability is 100%). However, we can still view this as a subjective probability because it expresses a measure of uncertainty. It is as though Lisa is saying "in 95% of cases where I feel as sure as I do about this, I turn out to be right".
  • Susan says "There is a lower chance of being shot in Omaha than in Detroit". Susan is expressing a belief based (presumably) on statistics.
  • Dr. Smith says to Christina, "There is a 75% chance that you will live." Dr. Smith is basing this on his research.
  • Nicolas says "It will probably rain tomorrow." In this case the likelihood that it will rain is expressed in vague terms and is subjective, but implies that the speaker believes it is greater than 1/2 (or 50%). Subjective probabilities have been extensively studied, especially with regards to gambling and securities markets. While this type of probability is important, it is not the subject of this book. A good reference is "Degrees of Belief" By Steven Vick (2002).

Notice that in the previous examples the likelihood of any particular outcome is expressed as a percentage (between 0% and 100%), as is common in everyday language. However, probabilities in formal probability theory are always expressed as real numbers in the interval [0, 1] (e.g. a probability of .25 may be expressed as 25%, or a probability of 1/π may be expressed as approximately 31.83%). Other differences exist between common expressions of probabilities and formal probability theory. For example, a probability of 0% is typically taken to mean that the event to which that probability is assigned is impossible. However, in probability theory (usually in cases where there are infinitely many possible outcomes) an event ascribed a probability of zero may actually occur. In some situations, it is certain that such an event will occur (e.g. in selecting a real number between 0 and 1, the probability of selecting any given number is zero, but it is certain that one such number will be selected).

Another way to express the probability of an outcome is by its odds: the ratio of the probability of "success" (event occurs) to the probability of "failure" (event does not occur). In gambling, odds are expressed as the ratio of the stakes risked by each participant in a wager. For instance: a bookmaker offering odds of 3 to 1 "against" a horse will pay a punter three times their stake (if the horse wins). In fact, the bookmaker (ignoring factors such as his potential need to "lay off" bets which are exposing him to the possibility of an unacceptable overall loss) is announcing that he thinks the horse has a 1/4 chance of winning. If we express odds as "chance of winning" : "chance of not winning", then 3 to 1 against would be represented as 1 : 3 or 1/3. So an event with a probability of 1/4 or 25% has odds of 1/3 or about 33%. This disparity is even more clear where an event has a probability of 50% (e.g., the odds of a coin showing heads is 50% : 50% = 1 : 1, or 1).

Types of probability

As mentioned earlier, probability can be expressed informally in a variety of different ways, but even formal definitions and approaches vary. The most general and rigorous approach is known as axiomatic probability theory, which will be the focus of later chapters. Here we briefly discuss a few other approaches, their uses and limitations. All of these approaches rely in one way or another on the concept of an experiment. Recall that probability provides means to study randomness and uncertainty.

An experiment is any action or process whose outcome is subject to uncertainty or randomness.

Here the term experiment is used in a wider sense than its usual connotation with controlled laboratory situations. Further clarification on experiments will be given later, but for now the following examples of experiments will suffice:

  • observing whether or not a commercial product is defective.
  • tossing a coin one or more times or selecting a card from a card deck.
  • conducting a survey.
  • measuring the wind speed or rainfall in a particular area.

Assuming that an experiment can be repeated under identical conditions, then each repetition of an experiment is called a trial.

Basic Concepts

There are two standard approaches to conceptually interpreting probabilities: the relative frequency approach and the subjective belief (or confidence approach). In the Frequency Theory of Probability, probability is the limit of the relative frequency with which certain outcomes occur in repeated trials (note that the outcome of any single trial cannot depend on the outcome of other trials). The relative frequency approach requires that experiments be random and that all possible outcomes be known before execution of the experiment. The probability of any set of outcomes is expressed as the relative frequency with which those outcomes will occur among many repeated trials.

Physical probabilities fall within the category of objective or frequency probabilities, and are associated with random physical systems such as roulette wheels, rolling dice and radioactive atoms. In such systems, a given outcome (such as a die yielding a six) tends to occur at a persistent rate, or 'relative frequency', in a long run of trials. Physical probabilities either explain, or are invoked to explain these stable frequencies.

Relative frequency probabilities are always expressed as a figure between 0% (the outcome essentially never happens) and 100% (the outcome essentially always happens), or similarly as a figure between 0 and 1. According to the Frequency Theory of Probability, saying that "the probability that A occurs is p%" means that if you repeat the experiment many times under essentially identical conditions, the percentage of time for which A occurs will converge to p. For example, a 50% chance that a coin lands "heads up" means that if you toss the coin over and over again, then the ratio of times the coin lands heads to the total number of tosses approaches a limiting value of 50% as the number of tosses grows. Notice that the outcome of one toss never depends on another toss, and that the ratio of heads to total number of tosses is always between 0% and 100%.
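For readers who like to experiment, the following short Python sketch (using only the standard random module; the helper name head_frequency is just an illustrative choice, not something from this book) simulates repeated tosses of a fair coin and shows the relative frequency of heads settling near 50% as the number of tosses grows.

import random

def head_frequency(num_tosses, seed=0):
    """Relative frequency of heads in num_tosses simulated fair-coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(num_tosses))
    return heads / num_tosses

# The relative frequency drifts toward 0.5 as the number of tosses grows.
for n in (10, 100, 10_000, 1_000_000):
    print(n, head_frequency(n))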

In the Subjective Theory of Probability, probability measures the speaker's "degree of belief" that a set of outcomes will result, on a scale of 0% (complete disbelief that the event will happen) to 100% (certainty that the event will happen). According to the Subjective Theory, saying that "the probability that A occurs is 2/3" means that I believe that A will happen twice as strongly as I believe that A will not happen. The Subjective Theory is particularly useful in assigning meaning to the probability of outcomes that in principle can occur only once. For example, how might one assign meaning to the following statement: "there is a 25% chance of an earthquake on the San Andreas fault with magnitude 8 or larger before 2050"? It would be very hard to quantify this measure in terms of relative frequency.

One way to represent an individual's degree of belief in a statement, given available evidence, is with the Bayesian approach. Evidential probability, also called Bayesian probability, can be assigned to any statement whatsoever, even when no random process is involved. On most accounts evidential probabilities are considered degrees of belief, defined in terms of dispositions to gamble at certain odds. The primary evidential interpretations include the classical interpretation, the subjective interpretation, the epistemic or inductive interpretation, and the logical interpretation.

The next several sections discuss the principal theories within the relative frequency approach to probability.

Classical theory of probability

The classical approach to probability expresses probability as a ratio of the number of favorable outcomes in a series of successive trials to the number of total possible outcomes. Note the immediate implication that the number of total possible outcomes be known. Furthermore, all possible outcomes are assumed to be equally probable and no two possible outcomes can both result from the same trial. Here, the term "favorable" is not subjective, but rather indicates that an outcome belongs to a group of outcomes of interest. This group of outcomes is called an event, which will be formalized with the introduction of axiomatic probability theory.

Classical definition of probability
If the number of outcomes belonging to an event E is N_E, and the total number of outcomes is N, then the probability of event E is defined as P(E) = N_E / N.

For example, a standard deck of cards (without jokers) has 52 cards. If we randomly draw a card from the deck, we can think of each card as a possible outcome. Therefore, there are 52 total outcomes. We can now look at various events and calculate their probabilities:

  • Out of the 52 cards, there are 13 clubs. Therefore, if the event of interest is drawing a club, there are 13 favorable outcomes, and the probability of this event is 13/52 = 1/4.
  • There are 4 kings (one of each suit). The probability of drawing a king is 4/52 = 1/13.
  • What is the probability of drawing a king OR a club? This example is slightly more complicated. We cannot simply add together the number of outcomes for each event separately (13 + 4 = 17) as this inadvertently counts one of the outcomes twice (the king of clubs). The correct answer is 16/52 = 4/13, which is essentially 13/52 + 4/52 − 1/52, as the counting sketch below also verifies.
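The following Python sketch verifies these three classical probabilities by brute-force counting; the (rank, suit) representation of the deck is just one possible way to model the cards.

from fractions import Fraction

# Model the 52-card deck as (rank, suit) pairs.
ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
suits = ["clubs", "diamonds", "hearts", "spades"]
deck = [(rank, suit) for rank in ranks for suit in suits]

def probability(event):
    """Classical probability: no. of favorable outcomes over no. of total outcomes."""
    favorable = [card for card in deck if event(card)]
    return Fraction(len(favorable), len(deck))

print(probability(lambda card: card[1] == "clubs"))                    # 1/4
print(probability(lambda card: card[0] == "K"))                        # 1/13
print(probability(lambda card: card[0] == "K" or card[1] == "clubs"))  # 4/13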

Classical probability suffers from a serious limitation. The definition of probability implicitly defines all outcomes to be equiprobable. While this might be useful for drawing cards, rolling dice, or pulling balls from urns, it offers no method for dealing with outcomes with unequal probabilities.

This limitation can even lead to mistaken statements about probabilities. An often given example goes like this:

I could be hit by a meteor tomorrow. There are two possible outcomes: I will be hit, or I will not be hit. Therefore, the probability I will be hit by a meteor tomorrow is 1/2.

Of course, the problem here is not with the classical theory, merely the attempted application of the theory to a situation to which it is not well adapted.

This limitation does not, however, mean that the classical theory of probability is useless. At many points in the development of the axiomatic approach to probability, classical theory is an important guiding factor.

Empirical or Statistical Probability or Frequency of occurrence

This approach to probability is well-suited to a wide range of scientific disciplines. It is based on the idea that the underlying probability of an event can be measured by repeated trials.

Empirical or Statistical Probability as a measure of frequency
Let n_A be the number of times event A occurs after n trials. We define the probability of event A as P(A) = lim_(n→∞) n_A / n.

It is of course impossible to conduct an infinite number of trials. However, it usually suffices to conduct a large number of trials, where the standard of large depends on the probability being measured and how accurate a measurement we need.

A note on this definition of probability: How do we know the sequence in the limit will converge to the same result every time, or that it will converge at all? The unfortunate answer is that we don't. To see this, consider an experiment consisting of flipping a coin an infinite number of times. We are interested in the probability of heads coming up. Imagine the result is the following sequence:

HTHHTTHHHHTTTTHHHHHHHHTTTTTTTTHHHHHHHHHHHHHHHHTTTTTTTTTTTTTTTT...

with each run of heads and tails being followed by another run twice as long. For this example, the sequence of relative frequencies oscillates between roughly 1/2 and 2/3 and doesn't converge.

We might expect such sequences to be unlikely, and we would be right. It will be shown later that the probability of such a run is 0, as is a sequence that converges to anything other than the underlying probability of the event. However, such examples make it clear that the limit in the definition above does not express convergence in the more familiar sense, but rather some kind of convergence in probability. The problem of formulating exactly what this means belongs to axiomatic probability theory.

Axiomatic probability theory

Although axiomatic probability theory is often frightening to beginners, it is the most general approach to probability and has been employed in tackling some of the more difficult problems in probability. It begins with a set of axioms which, although not immediately intuitive, are guided by the more familiar classical probability theory. These axioms are discussed in the (as yet unwritten) following chapter.

About This Book

This book discusses mathematical probability using calculus and linear algebra. Readers should have a good understanding of both topics before attempting to read and understand this book completely.



Set Theory

Introduction

The overview of set theory contained herein adopts a naive point of view. A rigorous analysis of the concept belongs to the foundations of mathematics and mathematical logic. Although we shall not initiate a study of these fields, the rules we follow in dealing with sets are derived from them.

Sets

Definition.

A set.

(Set)

  • A set is a well-defined collection of distinct object(s), which are called element(s).
  • We say that an element belongs to the set.
  • If x belongs to a set A, we write x ∈ A.
  • If x does not belong to the set A, we write x ∉ A.

Remark.

  • When a and b are (not) equal, denoted by a = b (a ≠ b), a and b are different symbols denoting the same (different) object(s).

Example. (Collection that is not well-defined)

  • The collection of easy school subjects is not a set.

We have different ways to describe a set, e.g.

  • word description: e.g., a set is the set containing the 12 months in a year;
  • listing: elements in a set are listed within a pair of braces, e.g., ;
the ordering of the elements is not important, i.e. even if the elements are listed in different order, the set is still the same. E.g., is still referring to the same set.
  • set-builder notation:
in which the closing brace must also be written. E.g., .
  • In particular, since a set contains distinct objects, the months contained in this set are distinct, and therefore there are only 12 elements in this set.

Example. (Empty set) The set {} is called an empty set, and it contains no element. It is commonly denoted by ∅ also.


Exercise.

Is an empty set?

Yes.
No.



Example.

  • ;
  • ;
  • .

Exercise.

Select all element(s) belonging to the set .



Definition. (Set equality) Two sets are equal if and only if they contain the same elements.

Remark.

  • Equivalently, two sets A and B are equal if each element of A is also an element of B and each element of B is also an element of A.
  • We use A = B (A ≠ B) to denote that sets A and B are (not) equal.

Example.

  • , , and the set that contains only an empty set are pairwise equal.
  • .

Definition. (Universal set) A universal set, denoted by U, is the set that contains all objects being considered in a particular context.

Remark.

  • In the context of probability, a universal set, which is usually denoted by Ω instead, is the set containing all outcomes of a particular random experiment, and is also called a sample space.

Definition. (Cardinality) The cardinality of a finite set, which is a set containing a finite number of elements, is the number of its elements.

Remark.

  • The cardinality of a set A can be denoted by |A| (or n(A)).
  • We do not use some other common notations for cardinality, to avoid ambiguity.
  • An infinite set is a set containing an infinite number of elements.
  • We will leave the cardinality of an infinite set undefined in this book, but it can be defined, in a more complicated way.

Example.

  • .
  • .
  • (the set containing each positive number) is an infinite set.

Exercise.

Calculate .

0
1
2
3
None of the above.




Subsets

We introduce a relationship between sets in this section.

Definition. (Subset)

  • If each element of set A is an element of set B, then A is a subset of B, denoted by A ⊆ B.
  • If A is not a subset of B, then we write A ⊈ B.

Remark.

  • By referring to the definitions of subsets and set equality, we can see that A = B is equivalent to (or holds if and only if) A ⊆ B and B ⊆ A.
  • The notation B ⊇ A means that B is a superset of A, which means that A is a subset of B.
  • This notation and terminology are seldom used.

Definition. (Venn diagram) A Venn diagram is a diagram that shows all possible logical relations between finitely many sets.

Remark.

  • It is quite useful for illustrating some simple relationships between sets, and making the relationships clear.
  • We may also add various annotations in the Venn diagram, e.g. the cardinality of each set, and the element(s) contained by each set.

Illustration of subset by Venn diagram:

A ⊆ B (A ≠ B):

*-----------------------*
|                       |
|                       |
|   *----------*        | <---- B
|   |          |        |
|   |    A     |        |
|   |          |        |
|   *----------*        |
*-----------------------*

Example.

  • {1, 3} ⊆ {1, 2, 3};

Venn diagram:

*-----------------------*
|                       |
|                       |
|   *----------*  2     | 
|   |          |        |
|   |    1  3  |        |
|   |          |        |
|   *----------*        |
*-----------------------*
  • {1, 2, 3} ⊆ {1, 2, 3, {1}};

Venn diagram:

*-----------------------*
|                       |
|                       |
|   *----------*  {1}   | 
|   |          |        |
|   |  1 2  3  |        |
|   |          |        |
|   *----------*        |
*-----------------------*
  • ∅ ⊆ A for each set A;
  • A ⊆ A for each set A.

Example. (Intervals) Intervals are commonly encountered subsets of ℝ. If a and b are (extended) real numbers [7] such that a < b, then (a, b) = {x : a < x < b}, [a, b] = {x : a ≤ x ≤ b}, [a, b) = {x : a ≤ x < b} and (a, b] = {x : a < x ≤ b}.

In particular, (−∞, ∞) = ℝ, and [−∞, ∞] is the set containing all extended real numbers.

Definition. (Proper subset)

  • Set A is a proper subset of set B if A ⊆ B and A ≠ B. We write A ⊂ B in this case.
  • If set A is not a proper subset of B, then we write A ⊄ B (but we rarely write this).

Remark.

  • The notation B ⊃ A means that B is a proper superset of A, which means that A is a proper subset of B.
  • This notation and terminology are seldom used.

Example.

  • .

Definition. (Complement) Let A be a subset of a universal set U. The (absolute) complement of A, denoted by Aᶜ, is the set {x ∈ U : x ∉ A}.

Example. If U = {1, 2, 3, 4, 5} and A = {1, 2, 3}, then Aᶜ = {4, 5}.

Venn diagram:

*-----------------------*
|                       |
|                4  5   |
|   *----------*        | 
|   |          |        | <---- U
|   |  1 2  3  |        |
|   |          |        |
|   *----------*        |
*--------^--------------*
         |
         A

Exercise.

Find .

None of the above.




Set operations

Probability theory makes extensive use of some set operations, and we will discuss them in this section.

Definition.

Union of two sets is indicated by the red region.

(Union of sets) The union of set A and set B, denoted by A ∪ B, is the set {x : x ∈ A or x ∈ B}.

Remark.

  • A ∪ B is read 'A cup B'.
  • We can denote A₁ ∪ A₂ ∪ ⋯ ∪ Aₙ by ⋃_{i=1}^n A_i (if the sequence of unions stops at Aₙ), or A₁ ∪ A₂ ∪ ⋯ by ⋃_{i=1}^∞ A_i (if the sequence of unions does not stop).

Example.

  • {red, orange} ∪ {orange, apple} = {red, orange, apple}.

Venn diagram:

*----------------*
|                |
|  red   *-------*--------*
|        | orange|        |
*--------*-------*        |
         |       apple    |
         *----------------*

Proposition. (Properties of union of sets) Let A, B and C be sets. Then, the following statements hold.

(a) A ∪ A = A;
(b) A ∪ B = B ∪ A (commutative law);
(c) A ∪ (B ∪ C) = (A ∪ B) ∪ C (associative law);
(d) A ⊆ A ∪ B and B ⊆ A ∪ B;
(e) A ∪ ∅ = A;
(f) A ∪ B = B if and only if A ⊆ B.

Proof. Informally, consider the following Venn diagrams:

(a)
*----*
|    | <---- A ∪ A (a set overlaps itself completely)
|    | <--- A
*----*
(b)
*----------------*
|////////////////| <---- A 
|////////*-------*--------*               
|////////|///////|////////| 
*--------*-------*////////| <--- B
         |////////////////|
         *----------------*
(shaded region refers to both A ∪ B and B ∪ A)
(c)
*----------*
|//////////| <--- A
|//////////| 
|/////*----|----*
|/////|////|////| <---- C
*-----*----*----*------------*
|/////|////|////|////////////| <--- B
|/////*----|----*////////////|
*----------*-----------------*
(shaded region refers to both A∪(B∪C) and (A∪B)∪C)
(d)
*----------------*
|                | <---- A 
|        *-------*--------*               
|        |       |        | 
*--------*-------*        | <--- B
         |                |
         *----------------*
A and B are both inside the whole region, which represents A ∪ B.
(e)
*----*
|    |
|    | .
*----*
(f)
*-----------------------*
|                       |
|                       |
|   *----------*        | <---- B
|   |          |        |
|   |    A     |        |
|   |          |        |
|   *----------*        |
*-----------------------*
(the whole region is both A u B and B)
We use a dot, which has zero area, to represent ∅. Then, we can see that the union of the region and the dot is the region itself.

Remark.

  • Formal proofs of propositions and theorems about sets will not be emphasized in this book.
  • Instead, we will usually prove the propositions and theorems informally, e.g. using Venn diagram.

Definition. (Intersection of sets)

Intersection of two sets.

The intersection of set A and set B, denoted by A ∩ B, is the set {x : x ∈ A and x ∈ B}.

Remark.

  • A ∩ B is read 'A cap B'.
  • We can denote A₁ ∩ A₂ ∩ ⋯ ∩ Aₙ by ⋂_{i=1}^n A_i (if the sequence of intersections stops at Aₙ), or A₁ ∩ A₂ ∩ ⋯ by ⋂_{i=1}^∞ A_i (if the sequence of intersections does not stop).

Example.

  • ;
  • .

Definition. (Disjoint sets) Set A and set B are disjoint (or mutually exclusive) if A ∩ B = ∅.

Remark.

  • I.e., A and B are disjoint if they have no element in common.
  • More than two events are disjoint if they are pairwise disjoint.

Venn diagram

*-----*       *-----*       *-----*       
|     |       |     |       |     |
|  A  |       |  B  |       |  C  |
*-----*       *-----*       *-----*

(A, B and C are disjoint)
      
*----------------*
|                | <---- D 
| *--*   *-------*--------*               
| |  |   |       |        | 
*-*--*---*-------*        | <--- E
  |  |   |                |
  *--*   *----------------*
   ^
   |
   F

(D, E and F are not disjoint, but E and F are disjoint)

Definition. (Partition of a set) A collection of sets forms a partition of a set S if the sets in the collection are disjoint and their union is S.

Venn diagram

*-----------------------*
| \        A            |
|  \                    |
|B  *-------------------*
|   /                   |
|  /       C            |
| /         *-----------*  <----- S
|/         /            |
*\        /             |
| *------*              |
|        |    E         |
|  D     |              |
*--------*--------------*

(A,B,C,D and E form a partition of S)

Proposition. (Properties of intersection of sets) Let A, B and C be sets. Then, the following statements hold.

(a) A ∩ A = A;
(b) A ∩ B = B ∩ A (commutative law);
(c) A ∩ (B ∩ C) = (A ∩ B) ∩ C (associative law);
(d) A ∩ B ⊆ A and A ∩ B ⊆ B;
(e) A ∩ ∅ = ∅;
(f) A ∩ B = A if and only if A ⊆ B.

Proof. Informally, consider the following Venn diagrams:

(a)
*----*
|    | <---- A ∩ A (a set overlaps itself completely)
|    | <--- A
*----*
(b)
*----------------*
|                | <---- A 
|        *-------*--------*               
|        |A∩B=B∩A|        | 
*--------*-------*        | <--- B
         |                |
         *----------------*
(c)
*----------*
|          | <--- A
|          | 
|     *----*----*
|     |    |    | <---- C
*-----*----*----*------------*
|     |////|    |            | <--- B
|     *----*----*            |
|          |                 |
*----------*-----------------*

*----*
|////| : A∩(B∩C)=(A∩B)∩C
*----*
(d)
*----------------*
|                | <---- A 
|        *-------*--------*               
|        | A ∩ B |        | 
*--------*-------*        | <--- B
         |                |
         *----------------*
(A ∩ B is inside A, and A ∩ B is inside B)
(e)
*----*
|    |
| .  |  
*----*
We use a dot, which has zero area, to represent ∅. Then, we can see that the intersection of the region and the dot is the dot.

Proposition. (Distributive laws) Let A, B and C be sets. Then, the following statements hold.

(a) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C);
(b) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).

Proof.

(a)
*----------*
|          | <--- A
|          | 
|     *----*----*
|     |    |    | <---- C
*-----*----*----*------------*
|/////|////|    |            | <--- B
|/////*----*----*            |
|//////////|                 |
*----------*-----------------*

*----*
|////| : AnB
*----*

*----------*
|          | <--- A
|          | 
|     *----*----*
|     |////|    | <---- C
*-----*----*----*------------*
|     |////|    |            | <--- B
|     *----*----*            |
|          |                 |
*----------*-----------------*

*----*
|////| : AnC
*----*

*----------*
|          | <--- A
|          | 
|     *----*----*
|     |////|    | <---- C
*-----*----*----*------------*
|/////|////|    |            | <--- B
|/////*----*----*            |
|//////////|                 |
*----------*-----------------*

*----*
|////| : (AnB)u(AnC)
*----*

*----------*
|          | <--- A
|          | 
|     *----*----*
|     |////|////| <---- C
*-----*----*----*------------*
|/////|////|////|////////////| <--- B
|/////*----*----*////////////|
|//////////|/////////////////|
*----------*-----------------*

*----*
|////| : BuC
*----*

*----------*
|          | <--- A
|          | 
|     *----*----*
|     |////|    | <---- C
*-----*----*----*------------*
|/////|////|    |            | <--- B
|/////*----*----*            |
|//////////|                 |
*----------*-----------------*

*----*
|////| : An(BuC)
*----*
(b)
*----------*
|//////////| <--- A
|//////////| 
|/////*----*----*
|/////|////|    | <---- C
*-----*----*----*------------*
|/////|////|////|////////////| <--- B
|/////*----*----*////////////|
|//////////|/////////////////|
*----------*-----------------*

*----*
|////| : AuB
*----*

*----------*
|//////////| <--- A
|//////////| 
|/////*----*----*
|/////|////|////| <---- C
*-----*----*----*------------*
|/////|////|////|            | <--- B
|/////*----*----*            |
|//////////|                 |
*----------*-----------------*

*----*
|////| : AuC
*----*

*----------*
|//////////| <--- A
|//////////| 
|/////*----*----*
|/////|////|    | <---- C
*-----*----*----*------------*
|/////|////|////|            | <--- B
|/////*----*----*            |
|//////////|                 |
*----------*-----------------*

*----*
|////| : (AuB)n(AuC)
*----*

*----------*
|          | <--- A
|          | 
|     *----*----*
|     |    |    | <---- C
*-----*----*----*------------*
|     |////|////|            | <--- B
|     *----*----*            |
|          |                 |
*----------*-----------------*

*----*
|////| : B∩C
*----*

*----------*
|//////////| <--- A
|//////////| 
|/////*----*----*
|/////|////|    | <---- C
*-----*----*----*------------*
|/////|////|////|            | <--- B
|/////*----*----*            |
|//////////|                 |
*----------*-----------------*

*----*
|////| : Au(B∩C)
*----*

Definition. (Relative complement)

Relative complement of (left) in (right).

The relative complement of set A in set B, denoted by B \ A, is the set {x ∈ B : x ∉ A}.

Remark.

  • If U is the universal set and A is a subset of U, then Aᶜ = U \ A.
  • B \ A is read 'B minus A'.

Example.

  • ;
  • ;
  • .

Proposition. (Properties of relative complement) Let A and B be sets. Then, the following statements hold.

(a) A \ A = ∅;
(b) A \ ∅ = A;
(c) ∅ \ A = ∅;
(d) A \ B = ∅ if and only if A ⊆ B;
(e) (A \ B) ∩ (B \ A) = ∅;
(f) A ∩ (B \ A) = ∅.

Proof.

  • A \ B can be viewed as 'removing' the region of B from the region of A.
(a)
*--*
|A |  removing the whole region <=> empty region left
*--*
(b)
*--*
|A |  removing empty region <=> whole region left
*--*
(c)
.  removing anything from an empty region <=> still an empty region
(d)
*----*
|    | <-- B
*--* |
|A | |    removing B from A becomes empty region <=> region B is not smaller than A
*--*-*
(e)
*-----*
| A\B | 
*-----*-----*
|     | B\A | <--- B
*-----*-----*
   ^
   |
   A
(A\B and B\A are always disjoint)  
(f)
*-----*
|     |
*-----*-----*
|     | B\A | <--- B
*-----*-----*
   ^
   |
   A

(A and B\A are always disjoint)  

Theorem. (De Morgan's laws) Let B, A₁, A₂, … be sets. Then, B \ (A₁ ∪ A₂ ∪ ⋯) = (B \ A₁) ∩ (B \ A₂) ∩ ⋯ and B \ (A₁ ∩ A₂ ∩ ⋯) = (B \ A₁) ∪ (B \ A₂) ∪ ⋯.

Proof.

(only 3 sets involved)
*-------------------------------*
|                    IV         |
|   *---------*                 |
|   |    I    | <--- A_1        | <--- B
|   *---------*-------*         |
|   |    II   | III   |<--- A_2 |
|   *---------*-------*         |
*-------------------------------* 
B\(A_1uA_2)=IV
(B\A_1)=III u IV \
                  ----> intersection: IV
(B\A_2)=I u IV   /

B\(A_1nA_2)=I u III u IV
(B\A_1)=III u IV \
                  ----> union: I u III u IV
(B\A_2)=I u IV   /

Remark.

  • If B = U (the universal set), then the equations become (A₁ ∪ A₂ ∪ ⋯)ᶜ = A₁ᶜ ∩ A₂ᶜ ∩ ⋯ and (A₁ ∩ A₂ ∩ ⋯)ᶜ = A₁ᶜ ∪ A₂ᶜ ∪ ⋯.

Example. Let and . Then,

  • ;
  • ;
*--------------------------*
|   *-------------------*  |
| 5 |///////////////////|  |
|   |/4/*------------*//|  |
|   |///|//////2/////|//|  |
|   |///|/*-------*//|//|  |
|   |///|/| 1    3|//|//|  |
|   |///|/|  AnB  |//|//| <-------- C
|   |///|/*-------*//|//|  |
|   |///|////////////|//|  |
|   |///*------------*//|  |
|   |///////////////////|  |
|   *-------------------*  |
*--------------------------*

*---*
|///| : C\(AnB)
*---*

*--------------------------*
|   *-------------------*  |
| 5 |\.\.\.\.\.\.\.\.\.\|  |
|   |\4\*------------*\.|  |
|   |\.\|.A\B..2.....|\.|  |
|   |\.\|.*-------*..|\.|  |
|   |\.\|.| 1    3|..|\.|  |
|   |\.\|.|   B   |..|\.| <-------- C
|   |\.\|.*-------*..|\.|  |
|   |\.\|............|\.|  |
|   |\.\*------------*\.|  |
|   |\.\\.\.\.\.\.\.\.\.|  |
|   *-------------------*  |
*--------------------------*

*---*
|...| : C\B
*---*
*---*
|\\\| : C\A
*---*
  • .
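These laws can also be checked quickly on small concrete sets with Python's built-in set operations; the particular sets B, A1 and A2 below are arbitrary choices for illustration only.

# Relative-complement form of De Morgan's laws on small concrete sets.
B = {1, 2, 3, 4, 5, 6}
A1 = {1, 2, 3}
A2 = {2, 3, 4, 7}

# B \ (A1 ∪ A2) = (B \ A1) ∩ (B \ A2)
assert B - (A1 | A2) == (B - A1) & (B - A2)
# B \ (A1 ∩ A2) = (B \ A1) ∪ (B \ A2)
assert B - (A1 & A2) == (B - A1) | (B - A2)
print("De Morgan's laws hold for these sets")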

Definition. (Power set) The power set 𝒫(A) (or 2^A) of set A is the set of all subsets of A, i.e., 𝒫(A) = {B : B ⊆ A}.

Example.

  • ;
  • 𝒫(∅) = {∅} (the power set of an empty set is not an empty set).

Remark.

  • The power set of a set containing n elements contains 2ⁿ elements.

Definition. (n-ary Cartesian product) The n-ary Cartesian product over n sets A₁, A₂, …, Aₙ, denoted by A₁ × A₂ × ⋯ × Aₙ, is the set {(a₁, a₂, …, aₙ) : a₁ ∈ A₁, a₂ ∈ A₂, …, aₙ ∈ Aₙ}.

Example. Let and . Then,

  • ;
  • ;
  • .
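In Python, itertools.product gives the n-ary Cartesian product directly; the sets A and B below are hypothetical stand-ins, since the specific sets of the example above are not fixed here.

from itertools import product

A = {1, 2}
B = {"x", "y", "z"}

# The 2-ary Cartesian product A × B as a set of ordered pairs.
A_times_B = set(product(A, B))
print(A_times_B)
print(len(A_times_B) == len(A) * len(B))  # True: |A × B| = |A| · |B|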



Combinatorics

What is combinatorics?

Combinatorics involves the counting and enumeration of elements of sets and similar structures such as sequences and multisets. We have discussed set theory in the previous chapter, and we will briefly discuss what sequences and multisets are.

Roughly speaking, a sequence is like a set, but ordering of elements matters, and a multiset is also like a set, but repetition of an element is allowed.

Sequences correspond to the discussion about ordered selection without replacement, while multisets correspond to the discussion about unordered selection with replacement.

Fundamental counting principles

Theorem.

Venn diagram illustrating inclusion-exclusion principle when (area of each (intersection of) set can be interpreted as its cardinality).

(Inclusion-exclusion principle) For finite sets A₁, A₂, …, Aₙ, |A₁ ∪ A₂ ∪ ⋯ ∪ Aₙ| = Σᵢ |Aᵢ| − Σ_{i<j} |Aᵢ ∩ Aⱼ| + Σ_{i<j<k} |Aᵢ ∩ Aⱼ ∩ Aₖ| − ⋯ + (−1)^(n+1) |A₁ ∩ A₂ ∩ ⋯ ∩ Aₙ|.

Proof. Idea:

  • To find the cardinality of the union of sets:
  1. Include the cardinalities of each of the sets.
  2. Exclude the cardinalities of the pairwise intersections (if needed).
  3. Include the cardinalities of the triple-wise intersections (if needed).
  4. Exclude the cardinalities of the quadruple-wise intersections (if needed).
  5. Include the cardinalities of the quintuple-wise intersections (if needed).
  6. Continue, until the cardinality of the n-tuple-wise intersection is included (if n is odd) or excluded (if n is even).

Remark.

  • The formula can be written more compactly as |A₁ ∪ ⋯ ∪ Aₙ| = Σ_{k=1}^n (−1)^(k+1) Σ_{1≤i₁<⋯<iₖ≤n} |A_{i₁} ∩ ⋯ ∩ A_{iₖ}|.
  • The notation used in this compact form will be discussed later in this chapter.
  • The formula is usually used for the cases n = 2 and n = 3.
  • When n = 2, the formula becomes |A ∪ B| = |A| + |B| − |A ∩ B|.
  • When n = 3, the formula becomes |A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |A ∩ C| − |B ∩ C| + |A ∩ B ∩ C|.
  • The name 'inclusion-exclusion principle' comes from the idea that the principle is based on over-generous inclusion, and then followed by compensating exclusion.

Example. Among 140 people, 110 of them speak at least one of English, French and German. Given that

  • 90, 30, 42 of them speak English, French, German respectively;
  • 23 speak English and French;
  • 25 speak English and German;
  • 16 speak French and German.

Then, the no. of people that speak English, French and German is 110 − (90 + 30 + 42 − 23 − 25 − 16) = 12.

Proof. Let E, F and G be the sets containing the people speaking English, French and German respectively. Then, by the inclusion-exclusion principle,

110 = |E ∪ F ∪ G| = |E| + |F| + |G| − |E ∩ F| − |E ∩ G| − |F ∩ G| + |E ∩ F ∩ G| = 90 + 30 + 42 − 23 − 25 − 16 + |E ∩ F ∩ G|,

and the result follows.

Venn diagram

*----------------*
|90-13-12-11=54  | <---- E
|      *---------*--------------*
|      |25-12=13 |42-13-12-4=13 | <--- G
*------*---------*--------------*-----*
|      |   12    |16-12=4       |     |
|      *---------*--------------*     | <--- F
|  23-12=11      |  30-11-12-4=3      |
*----------------*--------------------*
  140-110=30
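A minimal Python check of the computation above; the variable names are ad hoc abbreviations for the counts given in the example.

# Counts given in the example.
E, F, G = 90, 30, 42       # speak English, French, German
EF, EG, FG = 23, 25, 16    # speak each pair of languages
union = 110                # speak at least one language

# Solve the inclusion-exclusion identity for |E ∩ F ∩ G|.
EFG = union - (E + F + G - EF - EG - FG)
print(EFG)  # 12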


Exercise.

1 Calculate the no. of people that speak (a) English only; (b) French only; (c) German only.

(a) 90; (b) 30; (c) 42
(a) 54; (b) 13; (c) 3
(a) 54; (b) 11; (c) 3
(a) 54; (b) 3; (c) 11
None of the above.

2 Suppose people among the 140 people now learn to speak English. Calculate such that 123 of them speak at least one of English, French and German, and 20 people speak English, French and German now.

20
21
22
23
None of the above.

3 Continue from previous question. Calculate the no. of people who speak English and French now.

23
31
36
44
None of the above.



Theorem. (Multiplication counting principle) If trials 1, 2, …, k have n₁, n₂, …, nₖ possible outcomes respectively, then the k trials together have n₁n₂⋯nₖ possible outcomes.

Proof. First, consider the case for k = 2: we can enumerate each possible outcome using ordered pairs, as follows:

(1, 1), (1, 2), …, (1, n₂)
(2, 1), (2, 2), …, (2, n₂)
⋮
(n₁, 1), (n₁, 2), …, (n₁, n₂)

Then, we can count that there are n₁n₂ possible outcomes (by considering the rows (or columns) one by one).

After establishing the case for k = 2, we can establish the case for each positive integer k inductively, e.g. by treating the first two trials as a single combined trial with n₁n₂ outcomes and pairing it with the third trial; then we can count that there are (n₁n₂)n₃ outcomes (by considering the rows (or columns) one by one), and we can prove the remaining cases inductively.

Remark.

  • It is also known as rule of product.

Example.

Figure 3. In set theory, this is called the Cartesian product of two sets, with cardinality |A₁ × A₂| = n₁n₂.

The tree diagram of Figure 3 illustrates this for k = 2, n₁ = 3, and n₂ = 2. The number of possible outcomes is 3 × 2 = 6.
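A quick enumeration in Python (using the die/coin labels mentioned in the remark after the exercise) confirms the count.

from itertools import product

die = [1, 2, 3]      # three-sided die: trial 1 with n1 = 3 outcomes
coin = ["A", "B"]    # two-sided coin: trial 2 with n2 = 2 outcomes

outcomes = list(product(die, coin))
print(outcomes)       # all ordered pairs (die result, coin result)
print(len(outcomes))  # 3 * 2 = 6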


Exercise.

1 Determine the number of possible outcomes if and instead.

1
2
3
6
12

2 Suppose now. Given that the number of possible outcomes is still 6 without changing other conditions given in the example, calculate .

1
2
3
6
12



Remark.

  • This might be visualized by imagining a roll of a three-sided die (with three outcomes, e.g. 1, 2, 3), followed by a flip of a two-sided coin (with two outcomes, e.g. A, B).


Counting the number of elements in a power set

Example. (Number of elements in a power set) The number of elements in the power set of a set A with n elements is 2ⁿ.

Proof. Consider the n elements in A one by one. For each of them, we can either include it or not include it in a subset of A. Then, there are n steps involved to construct a subset of A, and each step has two outcomes. It follows from the multiplication counting principle that the n steps have 2ⁿ outcomes. That is, there are 2ⁿ possible (distinct) subsets of A. Since the power set contains all subsets of A by definition, it follows that the power set has 2ⁿ elements.
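The construction used in the proof can be mirrored in Python with itertools; the helper name power_set is ours, not a standard library function.

from itertools import chain, combinations

def power_set(iterable):
    """All subsets of the input, as tuples, grouped by increasing size."""
    items = list(iterable)
    return list(chain.from_iterable(combinations(items, r) for r in range(len(items) + 1)))

S = {1, 2, 3}
subsets = power_set(S)
print(subsets)                      # 8 subsets, from () to (1, 2, 3)
print(len(subsets) == 2 ** len(S))  # True: 2^3 = 8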


Exercise.

Determine the number of elements in (i.e. power set of empty set).

0
1
2
4
It is undefined.



Remark.

  • Here, n is an arbitrary nonnegative integer.

The counting principle misused

Figure 4. The counting principle is not useful when the number of choices at each step is not unique.
Figure 5. The counting principle only applies if the outcomes are defined such that (i.e. order matters).

Figures 4 and 5 illustrate the fact that the counting principle is not always useful. Figure 4 calculates the ways the three integers can be added to five, if the integers are restricted to the set . Since these three integers are choices (decisions), it is convenient to label the choice indices with capital letters:

E.g., means the second choice is the integer 3.

We cannot apply the counting principle to Figure 4 because the number of outcomes available at one step depends on the outcomes of the earlier steps; in our case, the number of choices for the later integers changes with what has already been chosen. This leads us to an important caveat about using the counting principle:

  • the counting principle cannot be used if the number of outcomes at each step cannot be uniquely defined

Figure 5 examines two flips of a coin. It calculates the correct number of outcomes to be 4, but only if we carefully define the outcome. The counting principle is valid only if heads followed by tails (HT) is a different outcome than tails followed by heads (TH). In other words:

  • when counting outcomes it is important to understand the role that order (enumeration) plays in defining outcomes

But, if we instead are counting the outcomes in a fashion such that HT and TH are considered to be the same, then a formula such as n₁ × n₂ cannot be used:

  • the counting principle does not hold if two different decision paths lead to the same final outcome (in the theorem, we speak of 'trial 1', 'trial 2', and so on, which implicitly assumes that the order matters in the outcomes for the trials)

Example. Suppose we throw two six-faced dice, coloured red and blue respectively. The number of possible distinct pairs of numbers facing up is 6 × 6 = 36.

Proof. Since the dice are distinguishable, we can use the multiplication principle of counting. To be more precise, we can let the possible numbers facing up on the red die be the possible outcomes in 'trial 1', and those on the blue die be the possible outcomes in 'trial 2'. Since each trial has six outcomes, it follows that the number of outcomes (i.e. possible distinct pairs) is 6 × 6 = 36.


Exercise.

Suppose the red die is replaced by another blue die, so that the two dice are no longer distinguishable. Calculate the number of possible distinct pairs of numbers facing up.

6
15
18
21
36




Number of ways to select some objects from n distinguishable objects

In this section, we will discuss the number (no.) of ways to select some objects from n distinguishable objects, in four types, classified by whether the selection is ordered, and whether the selection is with replacement.

Before discussing these four types of selection, we will introduce some preliminary mathematical concepts used in the following.

Preliminary mathematical concepts

Definition. (Factorial) For each nonnegative integer n, the factorial of n, denoted by n!, is n! = n × (n − 1) × ⋯ × 2 × 1, with 0! = 1.

More generally, we have the gamma function.

Definition. (Gamma function) The gamma function is

Γ(z) = ∫₀^∞ x^(z−1) e^(−x) dx,

in which z > 0.

Proposition. (Relationship between gamma function and factorial) For each nonnegative integer n, Γ(n + 1) = n!.

Proof. Using integration by parts,

Γ(z + 1) = ∫₀^∞ x^z e^(−x) dx = [−x^z e^(−x)]₀^∞ + z ∫₀^∞ x^(z−1) e^(−x) dx = z Γ(z).

Since Γ(1) = ∫₀^∞ e^(−x) dx = 1, it follows that Γ(n + 1) = n Γ(n) = n(n − 1)⋯(1) Γ(1) = n!
for each nonnegative integer n.

Remark.

  • The infinity in the proof can be regarded as an extended real number, or be in the limit sense.
  • Another more general result shown in the proof is that Γ(z + 1) = z Γ(z) for each positive z.
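Python's math module exposes both functions, so the relationship can be spot-checked numerically.

import math

# Gamma(n + 1) agrees with n! for nonnegative integers n.
for n in range(6):
    print(n, math.factorial(n), math.gamma(n + 1))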

Definition. (Binomial coefficient) The binomial coefficient, indexed by nonnegative integers n and r such that r ≤ n, denoted by C(n, r), is C(n, r) = n! / (r! (n − r)!).

Theorem. (Binomial series theorem) For each real number α,

(1 + x)^α = Σ_{k=0}^∞ C(α, k) x^k,   |x| < 1,

in which C(α, k) = α(α − 1)⋯(α − k + 1) / k!.

Remark. The following are some special cases of this theorem:

  • ;
  • ;
  • (negative binomial series);
  • (binomial series).

Theorem. (Binomial theorem) For each nonnegative integer n, (x + y)ⁿ = Σ_{r=0}^n C(n, r) x^(n−r) y^r.

Proof. It can be proved combinatorially or inductively. Complete proof is omitted.

The binomial theorem can be illustrated by Pascal's triangle:
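The first few rows can be generated with Python's math.comb; entry r of row n is the binomial coefficient C(n, r), i.e. the coefficient of x^(n−r) y^r in (x + y)ⁿ.

from math import comb

for n in range(6):
    print([comb(n, r) for r in range(n + 1)])
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]
# [1, 5, 10, 10, 5, 1]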

Ordered selection without replacement

Theorem. The no. of ways for ordered selection of r objects from n distinguishable objects without replacement is n(n − 1)⋯(n − r + 1) = n!/(n − r)!.

Proof. Consider an equivalent situation: selecting r objects from n distinguishable objects to be put into r ordered boxes, labelled box 1, box 2, …, box r, in which each box contains at most one object. By considering the boxes from box 1 to box r,

  • for box 1, there are n choices of object to be put into it
  • for box 2, there are n − 1 choices of object to be put into it, since the object put into box 1 cannot be simultaneously put into box 2
  • ...
  • for box r, there are n − r + 1 choices of object to be put into it, since each of the r − 1 objects put into boxes 1, …, r − 1 cannot be simultaneously put into box r

Thus, by the multiplication principle of counting, the desired no. of ways is n(n − 1)⋯(n − r + 1) = n!/(n − r)!.

Remark.

  • n!/(n − r)! is often denoted by P(n, r) or nPr (read 'n p r').
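In Python, this count is available as math.perm, and it matches the formula; the values n = 5, r = 3 anticipate the example below.

from math import perm, factorial

n, r = 5, 3
print(perm(n, r))                        # 60 ordered selections
print(factorial(n) // factorial(n - r))  # 60, from the formula n!/(n - r)!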

Example. The no. of distinct ways to select 3 objects to be put into 3 labelled boxes from 5 labelled objects is 5 × 4 × 3 = 60.


Exercise.

1 After putting the 3 objects into the 3 boxes, 2 of them are taken out and put into 2 boxes, labelled and . Calculate the no. of distinct ways to do this.

3
6
60
180
360

2 Continue from previous question. Suppose only 1 of them is taken out and put into 1 box, labelled , now. Calculate the no. of distinct ways to do this.

3
6
60
180
360



Example. (Competition) There are 16 candidates for a competition. The no. of ways to award the winner, 1st and 2nd runners-up is 16 × 15 × 14 = 3360.

If Amy and Bob are among the candidates, and it is given that Amy is awarded 1st runner-up, while Bob does not receive any award, the no. of ways to award the winner, 1st and 2nd runners-up becomes 14 × 13 = 182. In particular, Amy and Bob cannot be awarded winner or 2nd runner-up.


Exercise.

1 Suppose Chris is also among the candidates. Given that Amy, Bob and Chris receive an award from the competition, calculate the no. of ways to award winner, 1st and 2nd-runners up.

1
3
6
32
96

2 Continue from previous question. Given that Amy, Bob and Chris do not receive any award from the competition, calculate the no. of ways to award winner, 1st and 2nd-runners up.

1716
2496
3354
3357
3359




A special case of ordered selection without replacement is when the no. of selected objects equals the no. of objects available to be selected. In this case, the selection is called a permutation, and the no. of ways for a permutation of n objects (i.e. ordered selection of n objects from n objects) is n!.

Example.

Figure 6. 3!=3·2·1= 6 permutations of {1,2,3}.

The 6 ways to permute the string 123 are shown in Figure 6.

Unordered selection of distinguishable objects without replacement

Theorem. The no. of ways for unordered selection of r objects from n distinguishable objects without replacement is C(n, r) = n!/(r!(n − r)!).

Proof. There are two ways to prove this.

First, consider an equivalent situation: selecting r objects from n distinguishable objects without replacement to be put into one box [8]. Then, we consider the no. of ways to do this in order, and then merge the ways that are regarded to be the same for unordered selection (i.e. regarded as the same when we put the objects into one box). The no. of ways to do this in order is (choice i means the ith selection of an object to be put into the box)

n(n − 1)⋯(n − r + 1) = n!/(n − r)!.

Among these ways, putting the same r objects into the box in different orders counts as different ways, and we need to merge them together, into one way. To merge them, we consider how many different ways are counted for putting the same r objects into the box in different orders. Indeed, this is a permutation (ordered selection of r objects from r distinguishable objects), so the no. of such different ways is r!. So, we count r! times the no. of ways (i.e. scale up the no. of ways by a factor of r!), and thus we need to scale down the no. of ways by dividing it by r!. Thus, the desired no. of ways is n!/(r!(n − r)!) = C(n, r).

Second, we use the notion of a generating function, by encoding the selection process into a binomial series, and then using the coefficients to determine the desired no. of ways. To be more precise, recall a special case of the binomial series theorem:

(1 + x)ⁿ = Σ_{k=0}^n C(n, k) xᵏ.

By encoding each selection to each of the n factors (1 + x), through treating the 1 and the x in the ith factor as not selecting and selecting the ith object respectively, the coefficient of xʳ is the desired no. of ways, since it is the no. of ways to build xʳ through selecting x in r of the factors, and selecting 1 in the other factors (i.e. selecting r objects, regardless of the order). Thus, the desired no. of ways is C(n, r).

Remark.

  • The unordered selection without replacement is also known as combination.
  • C(n, r) is read as 'n choose r', or 'n c r'.
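In Python, this count is available as math.comb; dividing the ordered count by r! gives the same value (n = 4, r = 2 matches Figure 9 below).

from math import comb, perm, factorial

n, r = 4, 2
print(comb(n, r))                  # 6 unordered selections
print(perm(n, r) // factorial(r))  # 6: the ordered count scaled down by r!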

Example.

Figure 8: C(n, r) is shown for n = 3 and r = 0, 1, 2, 3. The dotted ellipses remind us that these are sets, where order is not important.

For a combination, the order in which the items are selected is not important, so each selection from a set can be regarded as a subset of the original set. Figure 8 illustrates this for the set S = {1, 2, 3}. The number of elements in this set is n = 3. From our earlier discussion of the power set, we know that the total number of subsets is 2³ = 8. All 8 subsets are shown in the figure, organized by how many items are in each subset (for example, the subset in the upper-left corner contains 3 elements, while all subsets with 2 elements occupy the lower-right corner.) Let r denote the number of elements "chosen" to be in each of the 8 subsets of set S (where the number of elements in S is n = 3.)

  • C(3, 0) = 1 set has r = 0 elements. It is the empty set: {}.
  • C(3, 1) = 3 sets have r = 1 element. They are {1}, {2}, and {3}.
  • C(3, 2) = 3 sets have r = 2 elements. They are {1, 2}, {1, 3}, and {2, 3}.
  • C(3, 3) = 1 set has r = 3 elements. It is the set S itself: {1, 2, 3}.

Example.

Figure 9: combinations.

No. of ways to select 2 objects from 4 distinguishable objects without considering the order is C(4, 2) = 4!/(2! 2!) = 6.

Example. (Competition) There are 16 candidates for a competition. The no. of ways to select 3 candidates to enter the final is C(16, 3) = 560.


Exercise.

1 Amy, Bob and Chris are among the candidates. Calculate the no. of ways to select them to enter final.

1
3
6
32
96

2 Continue from the previous question. Calculate the no. of ways to select candidates other than Amy, Bob and Chris to enter final.

220
286
554
557
559



Special cases worth remembering

The formula for counting combinations has special cases that are worth remembering:

  • C(n, 0) = C(n, n) = 1 (There is only one way to pick no thing and only one way to pick all things.)
  • C(n, 1) = C(n, n − 1) = n (there are n ways to pick one thing or to leave one thing out)
  • C(n, r) = C(n, n − r) (There are the same number of ways of picking r of n things as there are of leaving out r of n things)

Ordered selection of distinguishable objects with replacement

Theorem. The no. of ways for selecting r objects from n distinguishable objects in order, with replacement, is nʳ.

Proof. Consider the equivalent situation: selecting r objects from n types of objects, in which each type of the objects has unlimited stock, to be put into r ordered boxes (the same type of object may be selected more than once). Then, the no. of ways is nʳ, since for each box, there are n types of objects that can be selected to be put into it.

Remark.

  • r can be greater than n.

Example. (Setting password) The number of ways to set a password with 6 characters, with the following rules:

(R1) numbers are allowed
(R2) letters of the alphabet are allowed, and they are case-sensitive [9]
(R3) special characters (i.e. all characters other than numbers and letters) are not allowed

is 62⁶ = 56,800,235,584.

Proof. For each of the 6 positions available for the password, there are 10 + 26 + 26 = 62 choices of characters. Also, the characters can be repeated in more than one position, and order matters. So, this is a case of ordered selection of distinguishable objects with replacement. Thus, the desired number is 62⁶.
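A short Python check of this count, taking the 62 allowed characters from the standard string module.

import string

# Allowed characters under rules (R1)-(R3): digits and case-sensitive letters.
alphabet = string.digits + string.ascii_letters
print(len(alphabet))        # 62
print(len(alphabet) ** 6)   # 62**6 = 56800235584 possible 6-character passwords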


Exercise.

1 Suppose a machine can have password guesses per second. Approximate the maximum time needed for the machine to guess the six-character password correctly using the formula

(correct to two decimal places)

0.00 seconds
0.15 seconds
0.16 seconds
6.16 seconds
369.72 seconds

2 Suppose the password is safe if the maximum time needed for the machine to guess the password correctly, using the same formula in the previous question, is greater than or equal to 100 years (i.e. seconds). The minimum number of characters needed for the password (with the same rules) to be safe is

not greater than 10.
greater than 10 but not greater than 15.
greater than 10 but not greater than 20.
greater than 20 but not greater than 25.
greater than 25.




Unordered selection of distinguishable objects with replacement

This type of selection is probably the most complicated.

Theorem. The number of ways for unordered selection of r objects from n distinguishable objects with replacement is C(n + r − 1, r).

Proof. There are two ways to prove this.

First, consider an equivalent situation: selecting r objects from n types of objects, in which each type of the objects has unlimited stock, to be put into one box (the same type of object may be selected more than once). Then, we use the stars and bars notation: e.g.

*|**|*|...|*||**|*

in which the ith gap created by the n − 1 bars corresponds to the ith type of object (the leftmost gap made by one bar is the 1st gap, the rightmost gap made by one bar is the last gap), and the number of *'s in each gap represents the number of objects selected of the corresponding type. E.g., 2 *'s in the 2nd gap represent that 2 objects of the 2nd type are selected. Then, the desired no. of ways is the no. of arrangements of the r *'s and n − 1 bars [10], which is the no. of ways to select r positions from the n + r − 1 positions for the *'s [11] (order does not matter), calculated by C(n + r − 1, r).

Second, we use the notion of a generating function, by encoding the selection as follows:

  • encoding the selection of each type of objects to the factor 1 + x + x² + ⋯ + xʳ, by treating 1, x, x², etc. (up to xʳ) in the ith factor as selecting 0, 1, 2, etc. (up to r) objects from the ith type respectively

Then, the desired no. is the coefficient of xʳ in

(1 + x + x² + ⋯ + xʳ)ⁿ.

[12] By the binomial series theorem, the coefficient of xʳ is C(n + r − 1, r).

Remark.

  • r can be greater than n.
  • C(n + r − 1, r) is often denoted by H(n, r) (read 'n h r')
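For small n and r, the formula can be checked against a direct enumeration with itertools.combinations_with_replacement; the values n = 4, r = 2 below are arbitrary small choices.

from math import comb
from itertools import combinations_with_replacement

n, r = 4, 2
print(comb(n + r - 1, r))                                     # 10, by the formula
print(len(list(combinations_with_replacement(range(n), r))))  # 10, by enumeration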

Example. There are 8 distinct food or drink items, namely hamburger, egg, fries, cake, apple pie, apple juice, orange juice and coke. The number of distinct 4-item combos that must consist of distinct items (unordered selection without replacement) is C(8, 4) = 70, and that without restrictions (in particular, a combo may consist of more than one of the same item) (unordered selection with replacement) is C(8 + 4 − 1, 4) = C(11, 4) = 330.


Exercise.

1 Calculate the no. of distinct 4-item combos that must consist of 3 food items (which may be the same) and 1 drink item.

12
35
38
105
330

2 Calculate the no. of distinct 4-item combos that contain no drinks.

5
70
126
280
330

3 Suppose each food or drink item only has 2 left in the stock. Calculate the no. of distinct 4-item combos without restrictions. (Hint: )

19
45
183
266
330

4 Suppose each food costs $10, while each drink costs $5. Calculate the no. of distinct $20 combos without restrictions. (Hint: calculator should be used to ease the calculation)

15
30
60
121
225

5 Amy loves eating hamburger very much, so she must choose two hamburgers when she chooses the items for the 4-item combos. Calculate the no. of distinct ways for Amy to order a 4-item combo without restriction.

6
15
28
36
210



Example. (Number of integer solutions of an equation) The number of solutions to

x₁ + x₂ + x₃ + x₄ + x₅ + x₆ + x₇ = 10,

in which x₁, x₂, …, x₇ are nonnegative integers, is C(16, 10) = 8008.

Proof. Consider the following stars and bars graph:

|**|*|**|*|**|**

in which the no. of stars is 10, corresponding to the number on the RHS of the equation, and the no. of gaps created by the bars is 7, corresponding to the number of unknowns on the LHS of the equation. The no. of stars in each gap represents the (nonnegative) number assigned to that unknown. So, the number of solutions is the no. of arrangements of these stars and bars, namely C(16, 10) = 8008.

Alternatively, we can interpret this as 10 (no. on RHS) balls selected from 7 (no. of unknowns on LHS) types of balls, labelled 1, …, 7, with unlimited stock, to be put into a box, in which the number of balls labelled i in the box represents the number assigned to the ith unknown. Then, the no. of solutions is the no. of ways to do this, namely C(7 + 10 − 1, 10) = C(16, 10) = 8008.
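A brute-force check in Python (enumerating the first six unknowns and letting the seventh be determined) agrees with the stars-and-bars count.

from math import comb
from itertools import product

print(comb(16, 10))  # 8008, by stars and bars

# Enumerate x1, ..., x6 in {0, ..., 10}; x7 = 10 - (x1 + ... + x6) must be >= 0.
count = sum(1 for xs in product(range(11), repeat=6) if sum(xs) <= 10)
print(count)         # 8008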


Exercise.

1 Calculate the number of solutions if x₁, x₂, …, x₇ are positive integers instead. (Hint: letting yᵢ = xᵢ − 1, then yᵢ is a nonnegative integer)

28
56
84
560
5005

2 Calculate the number of solutions if the '' sign is changed to '' sign, i.e. the number of solutions to in which are nonnegative integers. (Hint: add one more positive integer unknown to LHS, so that the '' sign becomes '' sign)

3003
5005
6435
11440
19448



Summary

Selecting r [13] objects from n distinguishable objects

              with replacement     without replacement
ordered       nʳ                   n!/(n − r)!
unordered     C(n + r − 1, r)      C(n, r)

Exercise. Try to prove each of the above formulas, without looking at the previous subsections. After that, you can compare your proofs against the proofs in the previous subsections.


Partitions

Theorem. The number of ways to partition n distinguishable objects into k groups, with groups 1, 2, …, k containing exactly n₁, n₂, …, nₖ objects respectively (order does not matter), is n!/(n₁! n₂! ⋯ nₖ!).

Proof. There are two ways to prove this.

First, consider an equivalent situation: putting n₁, n₂, …, nₖ objects selected from the n distinguishable objects into boxes 1, 2, …, k respectively.

Then, consider the boxes one by one:

  • box 1: n₁ objects selected from the n distinguishable objects to be put into it, so the no. of ways is C(n, n₁)
  • box 2: n₂ objects selected from the remaining n − n₁ distinguishable objects [14], so the no. of ways is C(n − n₁, n₂)
  • ...
  • box k: nₖ objects selected from the remaining nₖ objects [15] to be put into it, so the no. of ways is C(nₖ, nₖ) = 1

By the multiplication principle of counting, the no. of ways for the whole process is

C(n, n₁) C(n − n₁, n₂) ⋯ C(nₖ, nₖ) = n!/(n₁! n₂! ⋯ nₖ!).

Second, we use the notion of a generating function, by encoding the partition process as follows:

  • in the ith factor (x₁ + x₂ + ⋯ + xₖ), the term x₁, x₂, …, xₖ represents that the ith object is put into box 1, 2, …, k respectively

Then, the desired no. of ways is the coefficient of x₁^(n₁) x₂^(n₂) ⋯ xₖ^(nₖ) in (x₁ + x₂ + ⋯ + xₖ)ⁿ, which is n!/(n₁! n₂! ⋯ nₖ!), by the multinomial theorem (a generalized version of the binomial theorem) [16].

Remark.

  • partitioning objects into two groups is the same as unordered selection without replacement [17]
  • n!/(n₁! n₂! ⋯ nₖ!) is called the multinomial coefficient, and is denoted by C(n; n₁, n₂, …, nₖ)
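A small helper (the name multinomial is ours, not a standard library function) computes the multinomial coefficient and reproduces the two worked examples that follow.

from math import factorial

def multinomial(*group_sizes):
    """n! / (n1! n2! ... nk!) for n = sum of the group sizes."""
    result = factorial(sum(group_sizes))
    for size in group_sizes:
        result //= factorial(size)
    return result

print(multinomial(3, 3, 3))                    # 1680: dice-sequence example
print(multinomial(2, 2, 1, 1, 1, 1, 1, 1, 1))  # 9979200: arrangements of PROBABILITY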


Example. (Sequence of dice outcomes) A six-faced die is rolled nine times. The number of distinct sequences in which 1, 3 and 5 each comes up three times is 9!/(3! 3! 3!) = 1680.

Proof. Consider this situation as the partition of the nine (ordered) outcomes from the die into three groups, which represent the positions where 1, 3 and 5 come up respectively. The three groups contain 3 outcomes each, so that each of these odd numbers comes up three times. It follows that the number of ways to partition the outcomes is 9!/(3! 3! 3!) = 1680.

For each partition of the outcomes into the different groups, we obtain a unique sequence of outcomes. [18]


Exercise.

1 Calculate the number of distinct sequences in which 2,4 and 6 each comes up three times instead.

840
1680
3360
6720
361200

2 Suppose we throw the die 12 times instead. Calculate the number of distinct sequences in which each number comes up two times.

2520
5040
113400
369600
7484400



Example. (Arrangement of letters) The number of letter arrangements of the word PROBABILITY is 11!/(2! 2!) = 9,979,200.

Proof. The word PROBABILITY has 2 letter B's and 2 letter I's. The other letters appear only once. So, we partition the 11 letter positions in the word into 9 groups, representing the letters P, R, O, B, A, I, L, T and Y respectively; the groups representing the letters B and I contain 2 letter positions each, and the other groups contain 1 letter position each. The number of such partitions is 11!/(2! 2!) = 9,979,200.


Exercise.

1 Calculate the number of letter arrangements of the word EXERCISE.

28
56
3360
6720
20160

2 Calculate the number of number arrangements of the number 171237615, such that the number formed is an odd number.

6720
11760
13440
23520
161280



Example. (Walking path) Consider the following diagram.

Suppose we are initially located at , and that we can only walk either one cell rightward or one cell downward for each step. The number of distinct sequence of steps such that we can walk from to is

Proof. First, observe that we need 6 and only 6 steps to walk from to [19], consisting 4 steps of walking rightward (