© 2009, 2010, 2011, 2012 Jennifer Harlow, Dominic Lee and Raazesh Sainudiin.
Creative Commons Attribution-Noncommercial-Share Alike 3.0
An experiment is an activity or procedure that produces distinct or well-defined outcomes. The set of such outcomes is called the sample space of the experiment. The sample space is usually denoted with the symbol $\Omega$.
An event is a subset of the sample space.
Probability is a function that assigns numbers in the range 0 to 1 to events
$P : \text{set of events } \rightarrow [0,1]$
which satisfies the following axioms:
Axiom 1: for any event $A$, $0 \le P(A) \le 1$.
Axiom 2: $P(\Omega) = 1$.
Axiom 3 (countable additivity): if $A_1, A_2, A_3, \ldots$ are pairwise disjoint events, then
\[ \underbrace{P\left(\bigcup_{i=1}^{\infty}A_i\right)}_{A_1 \cup A_2 \cup A_3 \cup \ldots} = \underbrace{\sum_{i=1}^{\infty}P\left(A_i\right)}_{P(A_1) + P(A_2) + P(A_3) + \ldots} \]
Property 1: $P(A) = 1 - P(A^c)$, where $A^c = \Omega \setminus A$.
Property 2: for any two events $A$ and $B$,
$P(A \cup B) = P(A) + P(B) - P(A \cap B)$
The idea in Property 2 generalises to the "inclusion-exclusion" formula.
Let $A_1, A_2, \ldots, A_n$ be any $n$ events. Then,
$$ \begin{array}{lcl} P\left(\bigcup_{i=1}^n A_i \right) &=& \sum_{i=1}^n P(A_i) \, - \, \sum_{i<j} P(A_i \cap A_j) \\ &\,& \quad + \, \sum_{i<j<k} P(A_i \cap A_j \cap A_k) - \ldots + (-1)^{n+1} P(A_1 \cap A_2 \cap \ldots \cap A_n) \end{array} $$
In words, we take all the possible intersections of one, two, three, $\ldots$, $n$ events and let the signs alternate.
Does the inclusion-exclusion formula agree with the extended Axiom 3: if $A_1, A_2, \ldots, A_n$ are pairwise disjoint events, then $P\left( \bigcup_{i=1}^n A_i \right) = \sum_{i=1}^n P(A_i)$?
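On a finite sample space like the two-dice examples further below, the two-event case of inclusion-exclusion can be checked by brute-force enumeration. Here is a small sketch in Python (Sage's underlying language); the particular events $A$ and $B$ are illustrative choices of our own, not from the notes:

```python
from fractions import Fraction
from itertools import product

# Sample space for two fair dice: all 36 ordered pairs, each with probability 1/36
omega = list(product(range(1, 7), repeat=2))
p = Fraction(1, 36)

def prob(event):
    """Probability of an event (a set of outcomes) under the uniform measure."""
    return p * len(event)

A = {w for w in omega if w[0] == 5}      # first roll is 5
B = {w for w in omega if sum(w) <= 4}    # sum of the rolls is at most 4

lhs = prob(A | B)                        # P(A u B)
rhs = prob(A) + prob(B) - prob(A & B)    # P(A) + P(B) - P(A n B)
print(lhs == rhs)                        # the two sides agree
```

Since these particular $A$ and $B$ happen to be disjoint, the $P(A \cap B)$ term is zero and the formula reduces to the extended Axiom 3, as expected.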
What exactly is stipulated by the axioms about the domain of the probability function?
The domain should be a sigma-field ($\sigma$-field) or sigma-algebra ($\sigma$-algebra), denoted $\sigma(\Omega)$ or $\mathcal{F}$, such that:
$\Omega \in \mathcal{F}$, i.e., the sample space itself is in $\mathcal{F}$;
if $A \in \mathcal{F}$, then $A^c \in \mathcal{F}$, i.e., $\mathcal{F}$ is closed under complementation;
if $A_1, A_2, \ldots \in \mathcal{F}$, then $\bigcup_{i=1}^{\infty} A_i \in \mathcal{F}$, i.e., $\mathcal{F}$ is closed under countable unions.
Thus the domain of the probability function is not just any old set of events (recall events are subsets of $\Omega$), but rather a set of events that forms a $\sigma$-field: one that contains the sample space $\Omega$ and is closed under complementation and countable unions.
$\left(\Omega, \mathcal{F}(\Omega), P\right)$ is called a probability space or probability triple.
Let $\Omega = \{H, T\}$. What $\sigma$-fields could we have?
$\mathcal{F}\left({\Omega}\right) = \{\{H, T\}, \emptyset, \{H\}, \{T\}\}$ is the finest $\sigma$-field.
$\mathcal{F}'\left({\Omega}\right) = \{ \{H, T\}, \emptyset \}$ is a trivial $\sigma$-field.
Let $\Omega = \{\omega_1, \ldots, \omega_n\}$.
$\mathcal{F}\left({\Omega}\right) = 2^\Omega$, the set of all subsets of $\Omega$, also known as the power set of $\Omega$.
$\vert 2^\Omega \vert = 2^n$.
Two events $A$ and $B$ are independent if $P(A \cap B) = P(A)P(B)$.
Intuitively, $A$ and $B$ are independent if the occurrence of $A$ has no influence on the probability of the occurrence of $B$ (and vice versa).
Flip a fair coin twice. Event $A$ is the event "The first flip is 'H'"; event $B$ is the event "The second flip is 'H'".
$P(A) = \frac{1}{2}$, $P(B) = \frac{1}{2}$
Because the flips are independent (what happens on the first flip does not influence the second flip),
$P(A \cap B) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$.
We can generalise this by saying that we will flip a coin with an unknown probability parameter $\theta \in [0,1]$. We flip this coin twice and the coin is made so that for any flip, $P(\mbox{'H'}) = \theta$, $P(\mbox{'T'}) = 1-\theta$.
Take the same events as before: event $A$ is the event "The first flip is 'H'"; event $B$ is the event "The second flip is 'H'".
Because the flips are independent,
$P(A \cap B) = \theta \times \theta = \theta^2$.
If we take event $C$ as the event "The second flip is 'T'", then
$P(A \cap C) = \theta \times (1-\theta)$.
Roll a fair die twice. The faces of the die are enumerated 1, 2, 3, 4, 5, 6.
Event $A$ is the event "The first roll is 5"; event $B$ is the event "The second roll is 1".
$P(A) = \frac{1}{6}$, $P(B) = \frac{1}{6}$
If the two rolls are independent,
$P(A \cap B) = \frac{1}{6} \times \frac{1}{6} = \frac{1}{36}$
You try at home
Suppose you roll two fair dice independently. What is the probability of getting the sum of the outcomes to be seven?
Solution: Please do this at home or with earphones! Watch the Khan Academy movie about probability and two dice.
Suppose that we are told that the event $A$ with $P(A) > 0$ occurs and we are interested in whether another event $B$ will now occur. The sample space has shrunk from $\Omega$ to $A$. The probability that event $B$ will occur given that $A$ has already occurred is defined by
$$P(B|A) = \frac{P(B \cap A)}{P(A)}$$
We can understand this by noting that once $A$ has occurred, the only part of $B$ that can still occur is $B \cap A$, and dividing by $P(A)$ renormalises the probabilities so that the shrunken sample space $A$ has probability 1.
If the two events $A$ and $B$ are independent then
$$P(B|A) = \frac{P(B \cap A)}{P(A)} = \frac{P(B)P(A)}{P(A)} = P(B)$$
which makes sense: we said that if two events are independent, then the occurrence of $A$ has no influence on the probability of the occurrence of $B$.
Roll two fair dice.
Event $A$ is the event "The sum of the dice is 6"; event $B$ is the event "The first die is 2".
How many ways can we get a sum of 6 from two dice?
$A = \{(1,5), (2,4), (3,3), (4,2), (5,1)\}$
$P(A) = \frac{1}{36} + \frac{1}{36} + \frac{1}{36} + \frac{1}{36} + \frac{1}{36} = \frac{5}{36}$
$B = \{(2,1), (2,2), (2,3), (2,4), (2,5), (2,6)\}$
$B \cap A = \{ (2,4)\}$
$P(B \cap A) = \frac{1}{36}$
$P(B|A) = \frac{P(B \cap A)}{P(A)} = \frac{\frac{1}{36}}{\frac{5}{36}} = \frac{1}{5}$
Look at this result in terms of what we said about the sample space shrinking to $A$.
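The shrinking of the sample space can be seen by enumeration: among the five equally likely outcomes in $A$, exactly one is also in $B$. A small Python sketch of that count:

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))
A = {w for w in omega if sum(w) == 6}   # sum of the dice is 6 (5 outcomes)
B = {w for w in omega if w[0] == 2}     # first die is 2

# With equally likely outcomes, P(B|A) = |B n A| / |A|
p_B_given_A = Fraction(len(B & A), len(A))
print(p_B_given_A)   # 1/5
```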
We just saw that $P(B|A)$, the conditional probability that event $B$ will occur given that $A$ has already occurred, is defined by $P(B \cap A)/P(A)$. By using the fact that $B \cap A = A \cap B$ and reapplying the definition of conditional probability to $P(A|B) = P(A \cap B)/P(B)$, we get the so-called Bayes' theorem.
$$\boxed{P(B|A) = \frac{P(B \cap A)}{P(A)} = \frac{P(A \cap B)}{P(A)} = \frac{P(A|B) P(B)}{P(A)}}$$
You try at home
Suppose we have a bag of ten coins. Nine of the ten coins are fair but one of the coins has heads on both sides. What is the probability of getting five heads in a row if I picked a coin at random from the bag and flipped it five times? If I obtained five heads in a row by choosing a coin out of the bag at random and flipping it five times, then what is the probability that I have picked the two-headed coin?
Solution: Please do this at home or with earphones! Watch the Khan Academy movies about applications of conditional probability and Bayes theorem to this bag of 10 coins.
You try
The next cell uses a function called randint which we will talk about more later in the course. For this week we'll just use randint as a computerised way of rolling a die: every time we call randint(1,6) we will get some integer number from 1 to 6, we won't be able to predict in advance what we will get, and the probability of each of the numbers 1, 2, 3, 4, 5, 6 is equal. Here we use randint to simulate the experiment of tossing two dice. The sample space $\Omega$ is all 36 possible ordered pairs $(1,1), \ldots (6,6)$. We print out the results for each die. Try evaluating the cell several times and notice how the numbers you get differ each time.
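The cell itself is not reproduced here; a minimal stand-in using Python's random.randint (which, like Sage's randint(1,6), includes both endpoints) might look like:

```python
import random

# Simulate one performance of the experiment: toss two fair dice
die1 = random.randint(1, 6)   # some integer from 1 to 6, each equally likely
die2 = random.randint(1, 6)
print("First die:", die1)
print("Second die:", die2)
```

Evaluating it several times gives a different ordered pair from the 36 in $\Omega$ each time.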



A random variable is a mapping from the sample space $\Omega$ to the set of real numbers $\mathbb{R}$. In other words, it is a numerical value determined by the outcome of the experiment.
This is not as complicated as it sounds: let's look at a simple example:
Roll two fair dice.
The sample space is the set of 36 ordered pairs $\Omega = \{(1,1), (1,2), \dots, (2,1), (2,2), \ldots, (1,6), \ldots, (6,6)\}$
Let random variable $X$ be the sum of the two numbers that appear, $X : \Omega \rightarrow \mathbb{R}$.
For example, $X\left((6,6)\right) = 12$
$P(X=12) = P\left(\{(6,6)\}\right)$
And, $X\left( (3,2) \right) = 5$
Let $\left(\Omega, \mathcal{F}, P \right)$ be some probability triple. Then a random variable, say $X$, is a function from the sample space $\Omega$ to the set of real numbers $\mathbb{R}$
$$X: \Omega \rightarrow \mathbb{R}$$
such that for every number $x$ in $\mathbb{R}$, the inverse image of the half-open interval $(-\infty, x]$ is an element of the collection of events $\mathcal{F}$, i.e.,
for every number $x$ in $\mathbb{R}$, $$X^{[-1]}\left( (-\infty, x] \right) := \{\omega: X(\omega) \le x\} \in \mathcal{F}$$
A random variable is said to be discrete when it can take at most countably many values (a finite set is countable). The three examples below are discrete random variables.
Finally, we assign probability to a random variable $X$ as follows:
$$P(X \le x) = P \left( X^{[-1]}\left( (-\infty, x] \right) \right) := P\left( \{ \omega: X(\omega) \le x \} \right)$$
The distribution function (DF) or cumulative distribution function (CDF) of any RV $X$, denoted by $F$, is:
$$F(x) := P(X \leq x) = P\left( \{ \omega: X(\omega) \leq x \} \right) \mbox{, for any } x \in \mathbb{R}$$
In our example above (tossing two dice and taking $X$ as the sum of the numbers shown) we said that $X\left((3,2)\right) = 5$, but $(3,2)$ is not the only outcome that $X$ maps to 5: $X^{[-1]}\left(5\right) = \{(1,4), (2,3), (3,2), (4,1)\}$
\[ \begin{array}{lcl} P(X=5) & = & P\left(\{\omega: X(\omega) = 5\}\right)\\ & = & P\left(X^{[-1]}\left(5\right)\right)\\ & = & P(\{(1,4), (2,3), (3,2), (4,1)\})\\ & = & \frac{4}{36} = \frac{1}{9} \end{array} \]
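This counting of inverse images can be done mechanically for every value of $X$ at once; a short Python sketch tabulating the whole PMF of the sum of two fair dice:

```python
from fractions import Fraction
from itertools import product

# Count the size of the inverse image X^{[-1]}(x) for each possible sum x
counts = {}
for w in product(range(1, 7), repeat=2):
    x = sum(w)                           # the random variable X applied to outcome w
    counts[x] = counts.get(x, 0) + 1

# Each outcome has probability 1/36, so P(X = x) = |X^{[-1]}(x)| / 36
pmf = {x: Fraction(n, 36) for x, n in counts.items()}
print(pmf[5])   # 1/9, agreeing with the hand calculation above
```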
Remember our "well-mixed" fruit bowl containing 3 apples, 2 oranges, 1 lemon? If our experiment is to take a piece of fruit from the bowl and the outcome is the kind of fruit we take, then we saw that $\Omega = \{\mbox{apple}, \mbox{orange}, \mbox{lemon}\}$.
Define a random variable $Y$ to give each kind of fruit a numerical value: $Y(\mbox{apple}) = 1$, $Y(\mbox{orange}) = 0$, $Y(\mbox{lemon}) = 0$.
Flip a fair coin until an 'H' appears. Let $X$ be the number of 'T's we see before the first 'H'.
$\Omega = \{\mbox{H}, \mbox{TH}, \mbox{TTH}, \ldots, \mbox{TTTTTTTTTH}, \ldots \}$
$X(\mbox{H}) = 0$, $X(\mbox{TH}) = 1$, $X(\mbox{TTH}) = 2$, $\ldots$
You try at home
Consider the example above of 'Pick a Fruit at Random'. We defined a random variable $Y$ there as $Y(\mbox{apple}) = 1$, $Y(\mbox{orange}) = 0$, $Y(\mbox{lemon}) = 0$. Using step by step arguments as done in the example of 'Sum of Two Dice' above, find the following probabilities for our random variable $Y$:
\[ \begin{array}{lcl}
P(Y=0) & = & P\left(\{\omega: Y(\omega) = \quad \}\right)\\
& = & P\left(Y^{[-1]} \left( \quad \right)\right)\\
&= & P(\{\quad , \quad \})\\
\end{array}
\]
Please do this at home or with earphones! Watch the Khan Academy movie about random variables.
When we introduced the subject of probability, we said that many famous people had become interested in it from the point of view of gambling. Games of dice are one of the earliest forms of gambling (probably deriving from an even earlier game which involved throwing animal 'ankle' bones or astragali). Galileo was one of those who wrote about dice, including an important piece which explained what many experienced gamblers had sensed but had not been able to formalise: the difference between the probability of throwing a 9 and the probability of throwing a 10 with two dice. You should be able to see why this is from our map above. If you are interested you can read a translation (Galileo wrote in Italian) of Galileo's Sopra le Scoperte dei Dadi. This is also printed in a nice book, Games, Gods and Gambling by F.N. David (originally published 1962, newer editions now available).
You try
We are going to use Sage with the fruit bowl example, and the random variable $X$ to give each kind of fruit a numerical value: $X(\mbox{apple}) = 1$, $X(\mbox{orange}) = X(\mbox{lemon}) = 0$. This is a discrete random variable because it can only take a finite number of discrete values (in this case, either 1 or 0).
We have made our own random variable map object in Sage called RV. As with the Sage probability maps we looked at last week, it is based on a map or dictionary. We specify the sample space and the probabilities, and the random variable map, MapX, from the sample space to the values taken by the random variable $X$.

To make an RV, we can specify the lists for the sample space, the probabilities, and the random variable values associated with each outcome. Since there are three different lists here, we can make things clearer by actually saying what each list is. The RV we create in the cell below is going to be called X.

(You don't have to worry about how RV works: it is our 'homemade' class for you to try out.)
We can get probabilities using the syntax X.P(x) to find $P(X=x)$. 'Syntax' is a way of saying how you instruct the computer to do what you want: the syntax you use for the computer may be different to the way that you'd write the expressions in your lecture notes.
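The course's RV class is provided in the lab worksheet and is not reproduced here. As a rough sketch of what such a class might look like in plain Python, only the X.P(x) syntax below is from the notes; the constructor arguments and internals are our assumptions:

```python
from fractions import Fraction

class RV:
    """A minimal stand-in for the course's homemade random variable class."""
    def __init__(self, sampleSpace, probs, values):
        # Three parallel lists: outcome i has probability probs[i]
        # and is mapped by the random variable to values[i]
        self.probMap = dict(zip(sampleSpace, probs))
        self.valueMap = dict(zip(sampleSpace, values))

    def P(self, x):
        """P(X = x): add the probabilities of all outcomes mapped to x."""
        return sum(p for w, p in self.probMap.items() if self.valueMap[w] == x)

# Fruit bowl: 3 apples, 2 oranges, 1 lemon; apple -> 1, citrus -> 0
X = RV(['apple', 'orange', 'lemon'],
       [Fraction(3, 6), Fraction(2, 6), Fraction(1, 6)],
       [1, 0, 0])
print(X.P(1), X.P(0))   # 1/2 1/2
```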


You have seen that different random variables can be defined on the same probability space (i.e., the sample space and the associated probability map), depending on how the outcomes are mapped to real values taken by the random variable. Usually there is some good experimental or mathematical reason for the particular random variable (i.e., outcome-to-value mapping) that we use. In the experiment we just did we could have been an experimenter particularly interested in citrus fruit but not concerned with what particular kind of citrus it is.
On the other hand, what if we want to differentiate between each fruit in the sample space? Then we could give each fruit outcome a different value.




An experiment that is used a lot in examples is the coin toss experiment. If we toss a fair coin once, the sample space is $\{H, T\}$: the outcome is either a head (denoted here as H) or a tail (T). The probability of a head is a half and the probability of a tail is a half.
We can do the probability map for this as one of our ProbyMap objects.

Let's have a random variable called oneHead that takes the value 1 if we get one head, 0 otherwise and simulate this with an RV object.


One toss is not very interesting. What if we have a sample space that is the possible outcomes of two independent tosses of the same coin?

Tossing the coin twice and looking at the results of each toss in order is a different experiment to tossing the coin once.
We have a different set of possible outcomes, {HH, HT, TH, TT}. Note that the order matters: getting a head in the first toss and a tail in the second (HT) is a different event to getting a tail in the first toss and a head in the second (TH).
We can define a different random variable on this set of outcomes. Let's take an experimenter who is particularly interested in the number of heads we get in two tosses.

As you can see, two different events in the sample space, a head and then a tail (HT) and a tail and then a head (TH) both give this random variable a value of 1. The event HH gives it a value 2 (2 heads) and the event TT gives it a value 0 (no heads).
Now we can try the probabilities.
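One way those probabilities could be computed by hand-enumeration, sketched in plain Python (encoding the outcomes as strings is our own choice, not the worksheet's):

```python
from fractions import Fraction

# Sample space for two independent tosses of a fair coin; each outcome has probability 1/4
outcomes = ['HH', 'HT', 'TH', 'TT']
numHeads = {w: w.count('H') for w in outcomes}   # the random variable: number of heads

def P(x):
    """P(numHeads = x): add the probabilities of the matching outcomes."""
    return sum(Fraction(1, 4) for w in outcomes if numHeads[w] == x)

for x in [0, 1, 2]:
    print(x, P(x))   # 0 -> 1/4, 1 -> 1/2, 2 -> 1/4
```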




The indicator function of an event $A \in \mathcal{F}$, denoted $\mathbf{1}_A$, is defined as follows:
\[
\mathbf{1}_A(\omega) :=
\begin{cases}
1 & \qquad \text{if} \quad \omega \in A \\
0 & \qquad \text{if} \quad \omega \notin A
\end{cases}
\]
The indicator function $\mathbf{1}_A$ is really an RV.
"Will it rain tomorrow in the Southern Alps?" can be formulated as the RV given by the indicator function of the event "rain drops fall on the Southern Alps tomorrow". Can you imagine what the $\omega$'s in the sample space $\Omega$ can be?
Recall that a discrete RV $X$ takes on at most countably many values in $\mathbb{R}$.
The probability mass function (PMF) $f$ of a discrete RV $X$ is:
$$f(x) := P(X=x) = P\left(\{\omega: X(\omega) = x \}\right)$$
The Bernoulli RV is a $\theta$-parameterised family of $\mathbf{1}_A$.
Take an event $A$. The parameter $\theta$ (pronounced 'theta') denotes the probability that "$A$ occurs", i.e., $P(A) = \theta$.
The indicator function $\mathbf{1}_A$ of "$A$ occurs" is the $Bernoulli(\theta)$ RV.
Given a parameter $\theta \in [0,1]$, the probability mass function (PMF) for the $Bernoulli(\theta)$ RV $X$ is:
\[
f(x;\theta)= \theta^x (1-\theta)^{1-x} \mathbf{1}_{\{0,1\}}(x) =
\begin{cases}
\theta & \text{if $x=1$,}\\
1-\theta & \text{if $x=0$,}\\
0 & \text{otherwise}
\end{cases}
\]
and its DF is:
\[
F(x;\theta) =
\begin{cases}
1 & \text{if } 1 \le x\text{,}\\
1-\theta & \text{if } 0 \le x < 1\text{,}\\
0 & \text{otherwise}
\end{cases}
\]
We emphasise the dependence of the probabilities on the parameter $\theta$ by specifying it following the semicolon in the argument for $f$ and $F$ and by subscripting the probabilities, i.e. $P_{\theta}(X=1)=\theta$ and $P_{\theta}(X=0)=1-\theta$.
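The PMF and DF above translate directly into code; here is a hedged Python sketch (the function names are our own):

```python
def bernoulli_pmf(x, theta):
    """f(x; theta) = theta^x (1-theta)^(1-x) on {0, 1}, and 0 otherwise."""
    if x in (0, 1):
        return theta**x * (1 - theta)**(1 - x)
    return 0

def bernoulli_cdf(x, theta):
    """F(x; theta): 0 below 0, jumps to 1-theta at 0, jumps to 1 at 1."""
    if x < 0:
        return 0
    elif x < 1:
        return 1 - theta
    else:
        return 1

print(bernoulli_pmf(1, 0.3), bernoulli_pmf(0, 0.3), bernoulli_cdf(0.5, 0.3))
```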
'For loops' are a very useful construct that most computer programming languages provide to allow us to repeat the same action. In Sage, you can use a for loop on a list to just go through the elements in the list, doing something with each element in turn.
The SAGE syntax for a 'for loop' is:
Start with the keyword for
Then we use a kind of temporary variable which is going to take each value in the list, one by one, in order. In the example below we use the name x for this variable and write for x in myList.
After we have specified what we are looping through, we end the line with a colon : before continuing to the next line.
Now, we are ready to write the body of the for loop. In the body, we put whatever we want to actually do with each value as we loop through. Remember when we defined a function, and the function body was indented? It's the same here: the body is a block of code which is indented (the Sage standard indentation for a block is 4 spaces).
Indicate to Sage when your for loop has ended by moving the beginning of the line following the for loop back so that it is aligned with the for which started the whole loop.
The cell below gives a very simple example using a list.
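The simple example in that cell might look something like this (the particular list is our own choice):

```python
# Loop through each element of a list, printing it as we go
myList = [1, 2, 3]
for x in myList:
    print(x)
```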

If we wanted to do this for any list, we could write a simple function that takes any list we give it and puts it through a for loop, like the function below.
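A sketch of such a function (the name printList is our own, hypothetical choice):

```python
def printList(aList):
    """Print each element of aList in turn, one per line."""
    for element in aList:
        print(element)

printList(['apple', 'orange', 'lemon'])
```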

Notice that we start indenting with 4 spaces when we write the function body. Then, when we have the for loop inside the function body, we just indent the body of the for loop again.
Sage needs the indentation to help it to know what is in a function, or a for loop, and what is outside, but indentation also helps us as programmers to be able to look at a piece of code and easily see what is going on.
Let's try our function on another list.

We have just programmed a basic for loop with a list. We can do much more than this with for loops, but this covers the essentials that we need to know. The important thing to remember is that whenever you have a list, you can easily write a for loop to go through each element in turn and do something with it.
You try
Example 3: For loops
Try first assigning the value 0 to a variable named mySum and then making yourself a list (you pick what it is called and what values it contains) and then looping through the list, adding each element in the list to mySum. When the loop has finished, the value of mySum will be the accumulated value of all the elements in the list.


What about defining a function to accumulate the values in a list? Remember to give your function a good name, and include the docstring to tell everyone what it does.

Try out your function!

A for loop can be used on more than just a list. For example, try a loop with the set S we make below: you can try making a loop to print the elements in the set one by one, as we did above, or to accumulate them, or do anything else you think is sensible.

Loop over the set and do something.

You can even use a for loop on a string like "thisisastring", but this is not as useful for us as being able to use a for loop on a list or set.
We can use the range function we met last week to make a for loop that will do something a specified number of times. Remind yourself about the range function:

Now let's use the counter idea to do a specified number of rolls of the die that we can simulate with randint(1,6). Notice that here the actual value of the elements in the list is not being used directly: the list is being used like a counter to make sure that we do something a specified number of times.
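A sketch of such a counted loop in plain Python (Python's random.randint behaves like Sage's randint here; collecting the rolls in a list is our own addition so we can look at them afterwards):

```python
import random

numberOfRolls = 10
rolls = []
for i in range(numberOfRolls):            # i counts 0, 1, ..., 9; its value is not used
    rolls.append(random.randint(1, 6))    # simulate one roll of a fair die
print(rolls)
```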


A conditional statement is also known as an if statement or if-else statement. And it's basically as simple as that: if [some condition] is true, then do [something]. To make it more powerful, we can also say else (i.e., if not), then do [a different something].
The if statement syntax is a way of putting into a programming language a way of thinking that we use every day: E.g. "if it is raining, I'll take the bus to uni; else I'll walk".
You'll notice that when we have an if statement, what we do depends on whether the condition we specify is true or false (i.e. is conditional on this expression). This is why if statements are called conditional statements.
When we say "depends on whether the condition we specify is true or false", in SAGE terms we are talking about whether the expression we specify as our condition evaluates to the Boolean true or the Boolean false. Remember those Boolean values, true and false, that we talked about in Lab 1?
The SAGE syntax for a conditional statement including if and else clauses is explained below:
Start with the keyword if
Immediately after keyword if we specify the condition that determines what happens next. The condition therefore has to be the kind of truth statement we talked about in Lab 1, an expression which SAGE can evaluate as either Boolean true or Boolean false. The examples below will make this a bit clearer.
After we have specified the condition, we end the line with a colon : before continuing to the next line.
Now, immediately below the line that starts if, we give the code that says what happens if the condition we specified is true. This block of code is indented so that SAGE can recognise that it should do everything in that block if the condition is true. If the condition is false, SAGE will not go into this indented block to execute any of this code but instead look for the next line which is aligned with the original if, i.e. the next line which is not part of the if-block.
In some situations we only want to do something if the condition is true, and there is nothing else we want to do if the condition is false. In this situation all we need is the if-block of code described above, the part that is executed only if the condition is true. Then we can carry on with the rest of the program. An everyday example would be "if it is cold I will wear gloves".
In some other situations, however, we also want to specify something to happen when the condition is false: the else-block. We specify this by following the if-block with the keyword else, aligned with the if so that SAGE knows this is where it jumps to when the if condition is false. The keyword else is also followed by a colon :
Immediately below the line that starts else, we say what happens if the condition we specified is false. Again, this block of code (the else-block) is indented so that SAGE can recognise that it should do everything in that block if the condition is false.
Note that SAGE will either execute the code in the if-block or execute the code in the else-block. What happens, i.e. which block gets executed, depends on whether the condition evaluates to true or false.
Let's set up a nice simple conditional statement and see what happens. We want to take two variables, named x and y, and print something out only if x is greater than y. The condition is 'x > y', and you can see that this evaluates to either true or false, depending on the values of x and y.
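A sketch of what that cell might contain (the particular values of x and y are our own):

```python
x = 5
y = 3
# print something out only if x is greater than y
if x > y:                           # 'x > y' evaluates to either True or False
    print("x is greater than y")    # this line runs only when the condition is True
```

Try changing the values of x and y and re-evaluating: when the condition is false, nothing is printed.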

We can nest conditional statements so that one whole conditional statement is in the if-block (or else-block) of another one.

You try
Example 4: Conditional statements
The cell above only did something if the condition was true. Now let's try if and else. This is a more complicated example and you might want to come back to it to make sure you understand what is going on. Try assigning different values to the variable myAge and see what happens when you evaluate the cell.

We could also define a function which uses if and else. Let us define a function called myMaxFunc which takes two variables x and y as parameters. Note how we indent once for the body of the function, and again when we want to indicate what is in the if-block and else-block.
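The function described above might be written like this (the docstring wording is our own):

```python
def myMaxFunc(x, y):
    """Return the larger of x and y."""
    # the if-block and else-block are indented one level beyond the function body
    if x > y:
        return x
    else:
        return y

print(myMaxFunc(3, 7))   # 7
```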


Finally, let's look at something to bring together for loops and conditional statements. We have seen how we can use a for loop to simulate throwing a die a specified number of times. We could add a conditional statement to count how many times we get a certain number on the die. Try altering the values of resultOfInterest and numberOfThrows and see what happens. Note that being able to find and alter values in your code easily is one of the benefits of using variable names.

To get even fancier, we could set up a map and count the occurrences of every number 1, 2, ..., 6. Make sure that you understand what we are doing here. We are using a dictionary with (key, value) pairs to associate a count with each number that could come up on the die (number on die, count of number on die). Notice that we do not have to use the conditional statement (if ...) because we can access the value by using the key that is a particular result of a roll of the die.
Try altering the numberOfThrows and see what happens.
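A Python sketch of that tally (the variable name numberOfThrows follows the text; the rest is our own):

```python
import random

numberOfThrows = 100
# map each possible result to a count: (number on die, count of that number)
counts = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0}
for i in range(numberOfThrows):
    result = random.randint(1, 6)
    counts[result] = counts[result] + 1   # the result itself is the key: no if needed
print(counts)
```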

Earlier, we looked at the conditional probability $P(B|A)$ when we toss two dice and $A$ is the event that the sum of the two numbers on the dice is 6 and $B$ is the event that the first die is a 2. We worked out that $P(B|A) = \frac{1}{5}$. See if you can do another version of the code we have above to simulate throwing two dice a specified number of times, and use two nested conditional statements to accumulate a count of how many times the sum of the two numbers is 6 and how many out of those times the first die is a 2. When the loop has finished, you could print out or disclose the proportion $\frac{\text{number of times sum is 6 and first die is 2}}{\text{number of times sum is 6}}$

Example 5: More coin toss experiments
If you have time, try the three-coin-toss and four-coin-toss examples below. These continue from Example 2 above.
Another new experiment: the outcomes of three tosses of the coin.

The random variable we define is the number of heads we get in three tosses.


If you are not bored yet, try a four-toss experiment where we define a random variable which takes value 1 when the outcome includes exactly two heads, and 0 otherwise.






