Applicable Mathematics/Probability

Probability is a way of expressing an expectation about the likelihood of an "event" occurring in an "experiment", based on whatever information is available either about the mechanism which lies behind the experiment (theoretical probability) or knowledge of previous events (experimental probability).

A random experiment often used in Maths puzzles is the birth of a child, in which the gender of the child cannot be known beforehand and is usually deemed to be a 50/50 chance of boy versus girl. In this case, the experiment is the birth, and the possible "outcomes" are "girl" and "boy". The word "event" usually refers to the particular outcome we consider a "success" (which simply means the type of outcome whose probability we wish to calculate). Often, the "outcomes" are expressed as the result of a SERIES of experiments ("What are the chances of throwing heads twice in two tosses of the same coin?").

The Measure of Probability

The normal measure of probability is a number between 0 and 1, where 0 means "impossible" (not quite true, see below), 0.5 means "as likely as not" and 1 means "certain to happen". For example, the probability of tossing a head is usually taken to be 0.5. In everyday language, the number is more likely to be expressed as a fraction (1/2 or "one in two") or a percentage ("50% chance"). When throwing a die, the probability of getting a 1 is 1/6 (i.e. we expect that in a large number of throws, the frequency of "1s" will be close to 1/6 of the number of throws).
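The die example above can be checked by simulation. The sketch below (in Python, with a hypothetical choice of seed and trial count) compares the theoretical probability 1/6 against the frequency observed over many simulated throws:

```python
import random
from fractions import Fraction

# Theoretical probability of rolling a 1: one matching outcome out of six.
p_one = Fraction(1, 6)

# Simulate a large number of throws and record the observed frequency of 1s.
random.seed(0)          # fixed seed so the run is repeatable
throws = 60_000
ones = sum(1 for _ in range(throws) if random.randint(1, 6) == 1)
frequency = ones / throws

print(float(p_one))     # 0.1666...
print(frequency)        # close to 1/6, but rarely exactly equal
```

With 60,000 throws, the observed frequency will usually land within a fraction of a percent of 1/6, illustrating the "large number of throws" claim.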

Two kinds of Probability

  • Theoretical Probability - Based on a knowledge of the mechanism behind the event. It is simplest to calculate when there is a finite number of possible outcomes, all believed to be equally likely. The probability of an event is simply the number of outcomes which would match our "event", divided by the total number of possible outcomes. On a die, there are 6 possible (and equally likely) outcomes for each roll, but only one of these outcomes results in a 1. Therefore we calculate the probability of rolling a 1 as 1/6. This is the most common type of probability used in Maths classes. The usual mistake made in calculating probability this way is not checking carefully whether you are counting "equally likely" outcomes. (To be silly, one might say "there are two possible outcomes: 1 or not 1, therefore the chance of rolling a 1 is 1/2".) The more common reasons for falsely believing a set of outcomes to be equally likely are that the die or coin is "loaded", that you have missed a possible outcome, or that you have confused two separate outcomes as a single one. For instance, if the coin is thick enough, the possibility that it can land stably on its edge should not be ignored. When tossing TWO coins, many people think there are three equally likely outcomes (two heads, two tails, one of each) and that therefore the chance of getting "one of each" is 1/3: but actually there are two different ways of getting a head and a tail, so the chance of getting "one of each" is actually 2/4 = 50%. If you are sure that you have calculated the probability correctly then you can ascribe a single number to the probability of the event, but the only way to be sure that you have not missed a possibility, or done the sums wrong, is to go ahead and try it lots of times and see what happens. But you are then into "Experimental Probability".
  • Experimental Probability - Based on experience. This is less prone to mistakes (provided the experimental conditions are fair), and the result is easy to calculate (the number of successful trials divided by the total number of trials). The larger the number of times that you perform the test, the closer you will come to the theoretical probability (provided that you calculated the theoretical probability correctly!), but the greater the chance that the result will not be EXACTLY the same.
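Both kinds of probability can be shown side by side with the two-coin example above. The sketch below (Python; the seed and trial count are arbitrary choices) first enumerates the four equally likely outcomes to get the theoretical answer, then estimates the same probability experimentally:

```python
import itertools
import random
from fractions import Fraction

# Theoretical: enumerate the equally likely outcomes of tossing two coins.
outcomes = list(itertools.product("HT", repeat=2))
# outcomes is [('H','H'), ('H','T'), ('T','H'), ('T','T')] - four, not three
one_of_each = [o for o in outcomes if set(o) == {"H", "T"}]
p_theoretical = Fraction(len(one_of_each), len(outcomes))  # 2/4 = 1/2

# Experimental: repeat the two-coin toss many times and count successes.
random.seed(0)
trials = 10_000
successes = sum(
    1 for _ in range(trials)
    if {random.choice("HT"), random.choice("HT")} == {"H", "T"}
)
p_experimental = successes / trials

print(p_theoretical)    # 1/2
print(p_experimental)   # close to 0.5, but rarely exactly 0.5
```

Enumerating the outcomes makes it hard to fall into the "three outcomes" trap, because the two orderings of "one of each" appear as distinct entries.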

In Maths classes, the first type of probability is more common; in science, the second.

Meaning of Probability

Repeated Events

Where an event can be repeated in the same way as often as desired, then not only is it clear what the calculated probability means ("I expect that if I toss this coin 500 times, I will see about 250 heads"), but also it will be possible to check whether the estimate was correct (toss the coin and see how many times you get a head).
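The "about 250 heads" expectation is easy to check by simulation. The sketch below (Python, with an arbitrary seed) repeats the 500-toss experiment several times, showing that the head-count is usually near 250 but varies from run to run:

```python
import random

random.seed(1)
# Toss a fair coin 500 times; repeat the whole experiment several times to
# see that the head-count hovers near 250 without being exactly 250.
head_counts = []
for run in range(5):
    heads = sum(random.randint(0, 1) for _ in range(500))
    head_counts.append(heads)
    print(run, heads)
```

The spread between runs is itself predictable (roughly plus or minus 11 heads, one standard deviation, for 500 fair tosses), which is what makes repeatable events so much easier to reason about than one-offs.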

One-offs

Where a trial (or event) is a one-off (neither this trial, nor a very similar one, can be done again), not only must the calculation be a theoretical one, but it is also less clear what the calculated figure MEANS. "There is a 50% chance of the Earth being destroyed by collision with a comet during this year." Either the Earth will or will not be destroyed; neither outcome will show you whether your calculated probability was correct. Actually, you CAN run the trial as many times as you like (treat each year as one trial), but there will be a maximum of ONE "successful" outcome, so it is not easy to see what light 1 or 2 or 3 or 4 destruction-free years sheds on your calculation. If, however, you have the opportunity to gamble on many different one-off events, and experience gives you good reason to trust your probability estimates, then probability does have some meaning: you would be wise in future wagers to bet on events to which you ascribe high probability, and to avoid improbable ones (except at very favourable odds).

This is actually similar to horse-racing - you are encouraged to believe you can calculate the odds of a horse winning, but most races are really one-offs, run in very different circumstances to all the other races. If you only bet on one race in your life, then you are unlikely to back a real outsider at whatever odds - but if you spend a lot of time at racecourses, you may well back quite a lot of outsiders, though only in the cases where you believe that the high odds against them are more generous than necessary - expecting (perhaps unwisely, since it is the bookies, not you, who set the odds) that the few that win will pay enough to outweigh your many small losses on the ones that did not.

Outcomes and sample spaces

When doing experiments, the answer is not known before the experiment is performed. However, we should know what results are possible. The possible results of an experiment are called outcomes. The set of all possible outcomes is called the sample space.

When you throw a die, you can get 1, 2, 3, 4, 5 or 6. These are the possible outcomes of a throw, and the sample space is U = {1, 2, 3, 4, 5, 6}. When we're looking at whether a birth results in a boy (B) or a girl (G), the sample space is U = {B, G}.
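Sample spaces translate naturally into sets in code. A minimal sketch (the variable names are illustrative, not standard):

```python
# Sample spaces for the two experiments above, as Python sets.
die_space = {1, 2, 3, 4, 5, 6}
birth_space = {"B", "G"}

# An event is a subset of the sample space, e.g. "roll an even number".
even = {n for n in die_space if n % 2 == 0}
assert even <= die_space   # every event is contained in the sample space
print(even)                # {2, 4, 6}
```

Modelling the sample space explicitly makes it easy to count outcomes for theoretical probability: here the event "even" has 3 of the 6 outcomes, so its probability is 3/6.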

Model of probability

A model of probability assigns every outcome a number between 0 and 1. The sum of the probabilities of all possible outcomes is 1 (if you cannot make them add up to 1, then you have missed or double-counted some possibilities).

Let's take a die-throw as an example. The model of probability for that would be:

P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6

However, when we're looking at whether a birth results in a boy or a girl, the model of probability results in:

P(B) = 0.514 and P(G) = 0.486

In the first example, all the outcomes have the same probability - the model is uniform. In the second example, the outcomes have different probabilities - the model is not uniform.
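A model of probability is naturally represented as a mapping from outcomes to numbers. The sketch below (Python; the seed and sample size are arbitrary) builds both models above, checks that each sums to 1, and uses the non-uniform one to drive a weighted simulation:

```python
import math
import random

# A model of probability: each outcome mapped to its probability.
die_model = {face: 1 / 6 for face in range(1, 7)}   # uniform
birth_model = {"B": 0.514, "G": 0.486}              # not uniform

# Sanity check: the probabilities of all outcomes must sum to 1.
for model in (die_model, birth_model):
    assert math.isclose(sum(model.values()), 1.0)

# The model can drive a simulation via weighted sampling.
random.seed(0)
sample = random.choices(list(birth_model), weights=birth_model.values(), k=5)
print(sample)   # a list of five 'B'/'G' outcomes
```

The sum-to-1 check is worth automating: it catches exactly the missed or double-counted possibilities mentioned above.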
