For the first few years of teaching AP Statistics, our goal for the end of the course was simple:
“If the P-value is low, the null must go!”
With this as the goal, students could cruise through their calculations (or punch buttons on the calculator) without thinking much about the mechanics. All they needed to know was the number for that P-value, and they were ready to write a scripted conclusion.
This approach seemed to work reasonably well, but students were still struggling with many concepts related to significance tests, including:
Lack of intuition about whether or not sample data will be statistically significant
Forgetting that a significance test is performed by assuming the null hypothesis is true
Difficulty in connecting their decision (reject the null or fail to reject the null) to a conclusion in the context of the problem
Struggling to understand how to increase the power of a significance test or how to change the probabilities of Type I and Type II errors
After a few years of using this memorize-a-catch-phrase approach, we realized that students didn’t really understand the P-value (like most adults!). We decided to set an intentional goal for students to be able to interpret the P-value (the holy grail of AP Statistics).
Here are 3 specific strategies to make this happen:
1. Use simulation to develop the concept informally
Don’t wait until the significance test chapter in the book to start developing the conceptual understanding of a P-value. There are plenty of opportunities in our curriculum to sneak the P-value into an activity. Here are a few:
“Assuming that Joy can’t smell Parkinson’s disease, there is a 0/25 = 0 probability that she would correctly guess 11 or more of the 12 shirts, purely by chance.”
“Assuming the soda contest is fair (1 in 6 are winners), there is a 5/51 = 0.098 probability that a class would get 2 or fewer winners out of 30, purely by chance.”
“Assuming that Mrs. Gallas is an 80% free throw shooter, there is a 0/27 = 0 probability that she would make 32 or fewer of her 50 free throws, purely by chance.”
It is not necessary for us to call it a P-value in any of these activities, but we do want students to be able to interpret the value in the context of the problem. Here are some tips that will help you to write a good interpretation:
The dotplots are created by assuming some claim is true (Joy can’t smell Parkinson’s, the soda contest was fair, Mrs. Gallas is an 80% free throw shooter).
We are looking for the probability of getting the observed result (Joy’s 11 out of 12 shirts, only 2 winners out of 30 in the soda contest, only 32/50 free throws made) or more extreme, purely by chance.
We say “purely by chance” because our simulations were constructed using a random process (guessing on shirts, rolling a die, spinning a spinner). For teachers who want more dots than a class period allows, a code sketch of the free throw simulation follows this list.
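Here is a minimal sketch of that simulation in Python. The function name simulate_p_value and the 10,000 repetitions are our own choices for illustration (the classroom dotplot above had only 27 dots); the logic is the same as the activity: assume the null claim is true, simulate, and count the proportion of results at least as extreme as the observed one.

```python
import random

# Simulate sets of 50 free throws, assuming the null claim that Mrs. Gallas
# is an 80% shooter. The approximate P-value is the proportion of simulated
# sets with 32 or fewer makes: the same "count the dots" logic as the dotplot,
# just with many more dots.
def simulate_p_value(n_shots=50, p_make=0.80, observed_makes=32, n_trials=10_000):
    as_extreme = 0
    for _ in range(n_trials):
        makes = sum(random.random() < p_make for _ in range(n_shots))
        if makes <= observed_makes:  # "or more extreme" means this few makes or fewer
            as_extreme += 1
    return as_extreme / n_trials

print(simulate_p_value())  # a small probability, well below 0.05
```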
2. Build a strong foundation with sampling distributions
Each of the simulations above created a dotplot that approximates a sampling distribution. We help students understand this concept by asking them, “What does this dot represent?” At the core of the sampling distribution is the idea that every random sample will produce slightly different results (variability).
Later in the course, we take these dotplots and approximate them with normal distributions by checking the Large Counts or the Large Sample/Normal conditions. Then the calculation of the P-value moves from counting dots to finding area under a normal distribution.
When we get to teaching significance tests formally, we always require students to draw a picture of the sampling distribution, clearly labeling the mean of the distribution and the sample statistic, and shading the region that represents the P-value.
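For the free throw example, the Large Counts condition is met (np0 = 40 and n(1 - p0) = 10, both at least 10), and that picture maps directly onto a short calculation: the mean of the distribution is p0, the sample statistic is p_hat, and the shaded region is the P-value. Here is a rough sketch of the one-proportion z-test computation, assuming the one-sided alternative that Mrs. Gallas makes fewer than 80% of her free throws; it is an illustration, not a required implementation.

```python
from math import sqrt
from scipy.stats import norm

# Free throw example: null claim p0 = 0.80, sample statistic p_hat = 32/50 = 0.64
p0, n = 0.80, 50
p_hat = 32 / 50

# Standard deviation of the sampling distribution of p_hat, assuming the null is true
sd = sqrt(p0 * (1 - p0) / n)

# Standardize the sample statistic, then find the area in the left tail
z = (p_hat - p0) / sd
p_value = norm.cdf(z)

print(f"z = {z:.2f}, P-value = {p_value:.4f}")  # z is about -2.83, P-value about 0.002
```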
3. Make students interpret the P-value (every time)
When our students are writing a conclusion for a significance test, we always have them start with the interpretation of the P-value. The second sentence is the ubiquitous comparison of the P-value to alpha, followed by the decision (reject the null or fail to reject the null) and a conclusion about the alternative hypothesis in the context of the problem.
Note: The interpretation of the P-value is NOT required on the AP Exam for full credit on a significance test. There is a risk to including it: students can lose credit if the interpretation is incorrect.
We think interpreting the P-value is entirely worth the risk, because the additional understanding this practice provides will undoubtedly help students in other places on the AP Exam (and in life!).