Reminder: talks tomorrow and Friday - now with titles!
Each lecture's notebook will begin with announcements, then Goals. These are my attempt to be clear about what I want you to take away from a given lecture. Together, the Goals for all lectures form a complete study guide.
Develop intuition for the purpose of, and distinction between, probability and statistics (Skiena 2.1.1)
Know the terminology and properties of basic probability (Skiena 2.1).
Know how to compute and interpret basic summary statistics (Skiena 2.2).
Know how to compute summary statistics in pandas.
What are these words? What is the purpose of each? What's the difference between the two?
Probability is a set of tools for describing a model of how the world (or some process in the world) behaves.
Statistics gives us a set of tools for estimating such a model, or for verifying or evaluating a hypothesized model, given observations of how the world behaved.
First, we need some terminology.
An experiment is a process that results in one of a set of possible outcomes.
The sample space ($S$) of the experiment is the set of all possible outcomes.
An event ($E$) is a subset of the outcomes.
The probability of an outcome $s$ is written $P(s)$ and has these properties:
- $0 \le P(s) \le 1$ for every outcome $s$
- the probabilities of all outcomes sum to one: $\sum_{s \in S} P(s) = 1$
A random variable ($V$) is a function that maps an outcome to a number.
The expected value $E(V)$ of a random variable $V$ is the sum of the probability of each outcome times the random variable's value at that outcome: $$E(V) = \sum_{s \in S} P(s) \cdot V(s)$$
If we run an experiment where we toss a fair coin, the sample space contains the outcomes $\{H, T\}$ representing heads and tails. The coin is fair, so the probability of each outcome is 0.5, which satisfies both of the properties above.
Suppose you made a deal with a friend to toss a coin, and if it comes up heads, your friend gives you a dollar. If it comes up tails, no money changes hands. The random variable $V$ that's relevant to your wallet is $V(H) = 1, V(T) = 0$. The expected value of this random variable is $V(H) \cdot P(H) + V(T) \cdot P(T) = 1 \cdot 0.5 + 0 \cdot 0.5 = 0.5$, which you can think of as the amount of money you would expect to earn per flip, on average, if you repeated this experiment many, many times.
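Here's a minimal sketch of that calculation in Python (the dictionaries are my own encoding of the example above):

P = {"H": 0.5, "T": 0.5}  # probability of each outcome
V = {"H": 1, "T": 0}      # the random variable: payoff at each outcome

# E(V) = sum over all outcomes s of P(s) * V(s)
E_V = sum(P[s] * V[s] for s in P)
print(E_V)  # 0.5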
Exercise: describe the rolling of a six-sided die using the same terminology as above. For a random variable, use the number on the die itself; find the expected value.
Exercise for later: Do the same as above for a roll of two six-sided dice, and calculate the expected value of the random variable that is the sum of the numbers that the two dice land on.
The expected value is one important property of a random variable, but if we want the whole story, we need to look at its probability density function (PDF): a graph with the random variable's values on the $x$ axis and the probability of the random variable taking on each value on the $y$ axis.
Here's the PDF of the random variable described above:
import matplotlib.pyplot as plt

# PDF of the coin-payoff random variable: V = 0 (tails) and V = 1 (heads),
# each with probability 0.5
plt.bar(["0", "1"], [0.5, 0.5])
plt.xlabel("V(s)")
plt.ylabel("P(s)")
Exercise: Draw the PDF for a loaded five-sided die that comes up 1 with probability 0.6 and has an equal chance of each of the remaining four faces.
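As a template (shown for a fair six-sided die, so the loaded-die exercise stays yours), a PDF like this can be drawn with the same `plt.bar` call used above:

faces = ["1", "2", "3", "4", "5", "6"]
probs = [1/6] * 6  # a fair die: every face has probability 1/6
plt.bar(faces, probs)
plt.xlabel("V(s)")
plt.ylabel("P(s)")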
We can think of a set of data as the outcome of one or more experiments; statistics gives us tools to describe the properties of the data and, eventually, to estimate how the underlying experiment behaves. For now, we'll talk about summary statistics, which provide aggregate descriptions of a data set. Let's assume we have a single numerical column of a table - say, the Height column of a dataset containing people.
Histograms are the statistical equivalent of probability density functions. They show the observed frequency of each outcome. For example, here's the histogram describing the result of flipping a coin 10,000 times (simulated below).
import random

# simulate N coin flips
N = 10000
outcomes = []
for i in range(N):
    outcomes.append(random.choice(("H", "T")))
#print(outcomes)

# count the heads and tails we observed
n_heads = 0
n_tails = 0
for out in outcomes:
    if out == "H":
        n_heads += 1
    if out == "T":
        n_tails += 1

# bar heights are counts; labels are the values of V (0 = tails, 1 = heads)
plt.bar(["0", "1"], [n_tails, n_heads])
The histogram is a direct analogue to the probability density function. In fact, we can convert it to an empirical PDF by dividing by the number of trials:
plt.bar(["0", "1"], [n_heads / N, n_tails / N])
Notice that only the $y$ axis scale changed. This is an estimate of the PDF based on the data we observed.
Real-world experiments of interest are more complicated - more complicated processes, more complicated sets of outcomes, etc. It's often useful to summarize the salient properties of an observed distribution. For this, we use summary statistics.
These tell you something about where the data is "centered".
(Arithmetic) Mean, aka "average": The sum of the values divided by the number of values: $$\mu_X = \frac{1}{n} \sum_{i=1}^n x_i$$.
This works well for datasets without many outliers; for example, the average height of an American woman is 5 feet 4 inches.
Geometric Mean: The $n$th root of the product of $n$ values: $$\left(\prod_{i=1}^n a_i\right)^\frac{1}{n}$$
This is a weird one, and not as often applicable. If you have a single zero, the geometric mean is zero. But it's useful for measuring the central tendency of a collection of ratios.
Median: The middle value - the element appearing exactly in the middle if the data were sorted. This is useful in the presence of outliers, or more generally when the distribution is weirdly shaped.
**\*-iles**
These generalize the median to fractions other than one half. For example, the five quartiles of a dataset are the minimum, the value larger than one quarter of the data, the median, the value larger than three quarters of the data, and the maximum.
Common examples aside from quartiles include percentiles (which divide the data into hundredths), deciles (tenths), and quintiles (fifths). All of the centrality measures above are demonstrated in the sketch below.
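Here's a minimal pandas sketch of these measures on a small made-up dataset (the numbers are arbitrary, and `scipy` is assumed to be available for the geometric mean):

import pandas as pd
import scipy.stats

x = pd.Series([1, 2, 2, 3, 4, 5, 100])        # arbitrary data with one outlier
print(x.mean())                                # arithmetic mean: dragged up by the outlier
print(x.median())                              # median: robust to the outlier
print(scipy.stats.gmean(x))                    # geometric mean (requires positive values)
print(x.quantile([0, 0.25, 0.5, 0.75, 1.0]))   # the five quartiles: min, Q1, median, Q3, max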
These tell you something about the spread of the data, i.e., how far measurements tend to be from the center.
Standard Deviation ($\sigma$): The square root of the average squared difference between the elements and the mean (with $n-1$ rather than $n$ in the denominator; see below): $$\sqrt{\frac{\sum_{i=1}^n (a_i - \bar{a})^2}{n-1}}$$
Variance: the square of the Standard Deviation (i.e., same thing without the square root).
Variance is easier to intuit: it's the average squared distance from the mean, with the small caveat that we divide by $n-1$ rather than by $n$.
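Here's a quick sketch of the $n$ versus $n-1$ distinction (the values are arbitrary; pandas divides by $n-1$ by default):

import pandas as pd

x = pd.Series([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(x.var())        # divides by n-1 (pandas default, ddof=1)
print(x.std())        # square root of the variance above
print(x.var(ddof=0))  # divides by n: the plain average squared distance from the mean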
There are built-in functions that do all of the above for us. To demo, we'll use a juicy dataset of Washington State employee salaries.
import pandas as pd
Load the data:
url = 'https://facultyweb.cs.wwu.edu/~wehrwes/courses/data311_23w/data/AnnualEmployeeSalary2.csv'
df = pd.read_csv(url, thousands=',')  # salary columns use commas as thousands separators
A good way to get some basic information about the dataset:
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 485791 entries, 0 to 485790
Data columns (total 9 columns):
 #   Column         Non-Null Count   Dtype
---  ------         --------------   -----
 0   Agy            485791 non-null  int64
 1   AgyTitle       485791 non-null  object
 2   Employee Name  485791 non-null  object
 3   Job Title      485791 non-null  object
 4   Sal2016        485791 non-null  int64
 5   Sal2017        485791 non-null  int64
 6   Sal2018        485791 non-null  int64
 7   Sal2019        485791 non-null  int64
 8   Sal2020        485791 non-null  int64
dtypes: int64(6), object(3)
memory usage: 33.4+ MB
"Agy" stands for "Agency". Let's limit our analysis to Western employees.
wwu_df = df[df['AgyTitle'] == "Western Washington University"]
wwu_df
| | Agy | AgyTitle | Employee Name | Job Title | Sal2016 | Sal2017 | Sal2018 | Sal2019 | Sal2020 |
|---|---|---|---|---|---|---|---|---|---|
| 339188 | 380 | Western Washington University | KHANGAONKAR, TARANG | INSTRUCTOR - 02204 | 0 | 0 | 0 | 7200 | 0 |
| 339189 | 380 | Western Washington University | MAROTTO, TRIPP | CLASSIFIED STAFF TEMP HOURLY - 19993 | 0 | 0 | 0 | 400 | 0 |
| 339190 | 380 | Western Washington University | GILMAN, JEREMY | EXEMPT PROF HOURLY TEMPORARY - 19913 | 0 | 0 | 0 | 600 | 0 |
| 339191 | 380 | Western Washington University | MACKAY, ANGUS | INSTRUCTOR - 02205 | 0 | 0 | 0 | 4700 | 0 |
| 339192 | 380 | Western Washington University | PATTERSON, LYNN | INSTRUCTOR - 02205 | 0 | 500 | 0 | 0 | 0 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 348086 | 380 | Western Washington University | DANA, JACOB | GRANT AND CONTRACT SPECIALIST - 143E | 0 | 0 | 0 | 0 | 7200 |
| 348087 | 380 | Western Washington University | TJOELKER, DEANNA | CLASSIFIED STAFF TEMP HOURLY - 19993 | 0 | 0 | 0 | 0 | 3100 |
| 348088 | 380 | Western Washington University | KIRMANI, AMMAR | INSTRUCTOR - 00803 | 0 | 0 | 0 | 0 | 6300 |
| 348089 | 380 | Western Washington University | RICKENBACKER, SAMUEL | CLASSIFIED STAFF TEMP HOURLY - 19993 | 0 | 0 | 0 | 0 | 3300 |
| 348090 | 380 | Western Washington University | CHIANG, DESDEMONA | EXEMPT PROF HOURLY TEMPORARY - 19913 | 0 | 0 | 0 | 0 | 1000 |

8903 rows × 9 columns
Now let's compute some stats!
wwu_df.describe()
| | Agy | Sal2016 | Sal2017 | Sal2018 | Sal2019 | Sal2020 |
|---|---|---|---|---|---|---|
| count | 8903.0 | 8903.000000 | 8903.000000 | 8903.000000 | 8903.000000 | 8903.000000 |
| mean | 380.0 | 15224.025609 | 16043.693137 | 16620.745816 | 17846.838144 | 18291.587105 |
| std | 0.0 | 27603.226230 | 29508.987971 | 30562.116177 | 32318.496527 | 33670.267126 |
| min | 380.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 25% | 380.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 50% | 380.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 75% | 380.0 | 17750.000000 | 17650.000000 | 19100.000000 | 20550.000000 | 21300.000000 |
| max | 380.0 | 276600.000000 | 366000.000000 | 374300.000000 | 390200.000000 | 400100.000000 |
Let's look at a single column and compute some finer-grained statistics. Notice from the table above that more than half of the salary values are zero (the 50% row), so we'll often filter those out:
wwu_df['Sal2020'].mean()  # includes the many zero salaries
sal = wwu_df['Sal2020'] # save me from having to type wwu_df['Sal2020']
sal[sal>0].mean()    # mean of the nonzero salaries
sal[sal>0].median()  # median nonzero salary
sal[sal>0].mode()    # the most common nonzero salary value
wwu_df[wwu_df['Sal2020']==500]  # which employees were paid exactly $500 in 2020?
sal.max()          # the largest 2020 salary
sal.argmax()       # positional index of the top earner
wwu_df.iloc[6920]  # look up that row by position
# a few more summary statistics to try:
#sal[sal>0].min()
#sal[sal>0].var()
#sal[sal>0].std()
wwu_df.sort_values('Sal2020',ascending=False).iloc[0:20]  # the 20 highest-paid employees in 2020
wwu_df['Sal2020'].plot.hist()  # histogram of 2020 salaries (sorting first wouldn't change it)
df[df["Sal2020"] > 0].groupby("AgyTitle").mean().tail(12)