Lecture 4: Basic Probability and Summary Statistics

Probability and Statistics

What are these words? What is the purpose of each? What's the difference between the two?

Probability is a set of tools for describing a model of how the world (or some process in the world) behaves.

Statistics gives us a set of tools for estimating such a model, or for verifying a hypothesized model, given observations of how the world behaved.

Probability - Basics

First, we need some terminology.

If we run an experiment where we toss a fair coin, the sample space contains the outcomes $\{H, T\}$ representing heads and tails. The coin is fair, so the probability of each outcome is 0.5. This satisfies the two basic requirements of a probability model: each outcome's probability is between 0 and 1, and the probabilities of all outcomes sum to 1.
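
To make this concrete, here's a minimal Python sketch (the dict-of-probabilities representation is our choice, not the lecture's notation) that encodes the model and checks both requirements:

```python
# The fair-coin model: map each outcome in the sample space to its probability.
model = {"H": 0.5, "T": 0.5}

assert all(0 <= p <= 1 for p in model.values())  # each probability is in [0, 1]
assert sum(model.values()) == 1.0                # the probabilities sum to 1
```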

Suppose you made a deal with a friend to toss a coin: if it comes up heads, your friend gives you a dollar; if it comes up tails, no money changes hands. The random variable $V$ that's relevant to your wallet is $V(H) = 1, V(T) = 0$. The expected value of this random variable is $V(H) \cdot P(H) + V(T) \cdot P(T) = 1 \cdot 0.5 + 0 \cdot 0.5 = 0.5$, which you can think of as the amount of money you would expect to earn per flip, on average, if you repeated this experiment many, many times.
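
Here's a small Python sketch of this calculation (the names V and P are ours, chosen to match the notation above); the simulation at the end illustrates the "many, many times" intuition:

```python
import random

V = {"H": 1, "T": 0}      # payoff for each outcome
P = {"H": 0.5, "T": 0.5}  # probability of each outcome

# Expected value: sum of payoff times probability over all outcomes.
expected = sum(V[o] * P[o] for o in P)
print(expected)  # 0.5

# Estimate the same quantity by simulating many flips: the average payoff
# per flip settles near the expected value.
flips = [random.choice(["H", "T"]) for _ in range(100_000)]
print(sum(V[f] for f in flips) / len(flips))  # close to 0.5
```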

Exercise: describe the rolling of a six-sided die using the same terminology as above. For the random variable, use the number shown on the die itself; find its expected value.

Probability Distributions

The expected value is one important property of a random variable, but if we want the whole story, we need to look at its probability density function (PDF): a graph with the random variable's values on the $x$ axis and the probability of each value occurring on the $y$ axis.

Here's the PDF of the random variable described above: a bar of height 0.5 at each of the values 0 and 1.
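
If you want to reproduce the plot yourself, here's a minimal matplotlib sketch (the styling details are one reasonable choice):

```python
import matplotlib.pyplot as plt

values = [0, 1]     # values of V
probs = [0.5, 0.5]  # probability of each value

plt.bar(values, probs, width=0.2)
plt.xticks(values)
plt.xlabel("value of $V$")
plt.ylabel("probability")
plt.title("PDF of the coin-payoff random variable")
plt.show()
```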

Exercise: Draw the PDF for a loaded five-sided die that comes up 1 with probability 0.6 and has an equal chance of showing each of the remaining four faces.

Statistics

We can think of a set of data as the outcome of one or more experiments; statistics gives us tools to describe the properties of the data and, eventually, to estimate how the underlying experiment behaves. In this lecture, we'll talk about summary statistics, which provide aggregate descriptions of a data set. For now, let's assume we have a single numerical column of a table - say, the Height column of a dataset containing people.

Histograms

Histograms are the statistical equivalent of probability density functions. They show the observed frequency of each outcome. For example, here's a histogram describing the result of flipping a coin ten times: one bar counting heads, one counting tails.

The histogram is a direct analogue of the probability density function. In fact, we can convert it to an empirical PDF by dividing each count by the number of trials:

Notice that only the $y$ axis scale changed. This is an estimate of the PDF based on the data we observed.
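
Here's a sketch of both plots in Python (the simulation details and styling are one reasonable choice, assuming matplotlib is available):

```python
import random
from collections import Counter
import matplotlib.pyplot as plt

flips = [random.choice(["H", "T"]) for _ in range(10)]
counts = Counter(flips)  # observed frequency of each outcome
n = len(flips)

fig, (ax1, ax2) = plt.subplots(1, 2)

# Histogram: raw counts of each outcome.
ax1.bar(list(counts.keys()), list(counts.values()))
ax1.set_ylabel("count")

# Empirical PDF: the same bars, each divided by the number of trials.
ax2.bar(list(counts.keys()), [c / n for c in counts.values()])
ax2.set_ylabel("relative frequency")

plt.show()
```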

Real-world experiments of interest are more complicated - more involved processes, larger sets of outcomes, and so on. It's often useful to summarize the salient properties of an observed distribution. For this, we use summary statistics.

Central Tendency Measures

These tell you something about where the data is "centered".

(Arithmetic) Mean: The sum of the values divided by the number of values: $$\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i$$

This works well for data sets without many outliers; for example, the average height of an adult American woman is about 5 feet 4 inches.
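
Here's a quick sketch of the outlier caveat (the height values, in inches, are made up for illustration):

```python
import statistics

heights = [64, 62, 66, 63, 65]
print(statistics.mean(heights))          # 64

# A single outlier - say, a data-entry error - drags the mean far away.
print(statistics.mean(heights + [640]))  # 160
```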

Geometric Mean: The $n$th root of the product of $n$ values: $$\left(\prod_{i=1}^n x_i\right)^\frac{1}{n}$$

This is a weird one, and less often applicable: if even a single value is zero, the geometric mean is zero. But it's useful for measuring the central tendency of a collection of ratios, as the sketch below shows.
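
For example, averaging growth rates: this minimal sketch (the ratios are hypothetical) shows that the geometric mean is the constant per-step ratio that produces the same overall change:

```python
import math

# Hypothetical year-over-year growth ratios: +10%, +50%, -20%.
ratios = [1.10, 1.50, 0.80]

gmean = math.prod(ratios) ** (1 / len(ratios))
print(gmean)  # about 1.097, i.e., roughly 9.7% equivalent annual growth

# Applying the geometric mean every year reproduces the total growth.
assert math.isclose(gmean ** len(ratios), math.prod(ratios))
```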

Median: The middle value - the element appearing exactly in the middle if the data were sorted (or the average of the two middle elements when there's an even number of them). This is useful in the presence of outliers, or more generally when the distribution is weirdly shaped.
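
Continuing the made-up heights example from the mean above, the median barely moves when the outlier appears:

```python
import statistics

heights = [64, 62, 66, 63, 65]
print(statistics.median(heights))          # 64

# The same outlier that pushed the mean to 160 leaves the median at 64.5.
print(statistics.median(heights + [640]))  # 64.5
```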

Variability Measures

These tell you something about the spread of the data, i.e., how far measurements tend to be from the center.

Standard Deviation ($\sigma$): The square root of the average squared difference between the elements and the mean: $$\sigma = \sqrt{\frac{\sum_{i=1}^n (x_i - \bar{x})^2}{n-1}}$$

Variance: the square of the Standard Deviation (i.e., same thing without the square root).

Variance is easier to intuit: it's the average squared distance from the mean, with the small caveat that we divide by $n - 1$ rather than by $n$ (a correction that accounts for the mean itself being estimated from the same data).
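
Using the same hypothetical heights as above, here's a sketch of both measures (Python's statistics module uses the $n - 1$ denominator for these sample statistics):

```python
import statistics

heights = [64, 62, 66, 63, 65]

print(statistics.variance(heights))  # 2.5
print(statistics.stdev(heights))     # about 1.58, the square root of 2.5
```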