My First MathBootCamps Calculator Guide

Last time that I posted about MathBootCamps, I mentioned my linear algebra study guide about span and linear combinations and the idea of releasing a book a chapter at a time. This has since evolved into three different types of downloads on the site:

  • Study Guides – the first example of this is the linear algebra guide. Eventually there will be enough released for each topic to constitute a full review of a course.
  • Calculator Guides – these are focused on tasks associated with the TI83/84 calculator, which is still used quite a bit in high school and college settings. This would probably cover a few books’ worth of material, since there could be full guides for different courses.
  • Problem Packs – the idea here is that for any given post on MathBootCamps, there can be an associated problem pack. These are basically workbooks focusing on a very specific idea.

Finally, I have the first calculator guide finished! The focus of this particular guide is statistical plots such as histograms, boxplots, etc. This seemed like the best place to start for what can eventually become a complete guide to statistics on the TI83/84. It took a while to get the instructions easy to follow and the screenshots looking clean and consistent. I also had another person run through the guide, edit the writing, and try out the steps themselves to catch anything confusing. The audience is students taking stats, or first-time stats teachers who are approaching the course using the TI83/84.

[Image: the calculator guide cover]

The biggest thing I tried to do with this guide was make it so that someone reading it could jump into any section they want without needing to read through all of the previous sections. The inspiration for this came from the “cookbook” style books that are out there for programming.

You can download the guide here: https://gum.co/nPkOS

Releasing a book… one chapter at a time (linear algebra study guide)

This week marks the beginning of an interesting experiment. For a long time, one of the goals for MathBootCamps has been to offer review books (think: Schaum’s Outlines and similar) to students for a wide variety of topics. This is a huge undertaking; so big, in fact, that it is hard to know where to even start. This has resulted in a lot of partially finished writing lying around in different folders.

Visitors to mathbootcamps.com, however, are probably not looking for an entire review book just yet. The majority of my traffic is from students who are trying to understand a specific topic and have found me through a Google search.

With this in mind, I realized that I could release chapters, or even sections, of review books as they are finished. Each one would essentially be a study guide for that particular topic. These could then be made available to students visiting any related page on mathbootcamps.com. As a writer, this means I could jump from topic to topic (write one chapter for stats, write another for linear algebra – it’s more fun that way!) and adjust as I go based on feedback. Eventually, a group of study guides could then be offered together as a book, once a particular topic has been covered completely.

Yesterday, I made my first study guide available!



[Image: linear combinations and span study guide cover]
Click to check it out (overlay – you won’t be taken to a new page)

A few things to note:

  • I went with “pay what you want” pricing. I want people to be able to access the information freely, but also be able to support MathBootCamps if they choose.
  • This would probably be chapter 3 or 4 of a linear algebra review book. But with releasing chapters like this, there is no need to go in order. This would certainly be different if someone tried releasing a work of fiction one chapter at a time.
  • I’m using Gumroad. I reviewed several other options, and this looked like the best for now. This choice and my experience will be the subject of a future post.
  • It was a ton of work to put this together. Even though the study guide is only 40 pages, I edited it at least three times and rewrote sections more than once. I had to come up with a style that would be used throughout and get around the tech stuff as well. This will go much faster with the next study guide.

There is still a lot to figure out. Gumroad gives me a sort of storefront I can send people to, and I need to work on making that look halfway decent. I will also likely make a special study guide page on MathBootCamps once more than one guide is available. Currently, I am working on the next study guide – statistical graphs/plots with the TI83/84 calculator, so that shouldn’t be too long from now!

A very nice pattern: squaring numbers like 99, 999, 9999, etc

Quick! What’s 9²?

I’m assuming (hoping) you rolled your eyes or thought “81” (thanks for playing along). Since that was easy enough, let’s make it more interesting… what about 99²? or 999²? or even 99999²?

By the end of this post, you will be able to rattle off the square of any number like this just as quickly as you think of the answer from the good ole times table. You’ll be the life of every party! You will simply dazzle others with your math magic! Yes I’m done.

It turns out that squaring numbers where the digits are all 9 follows a very predictable pattern. While there are patterns when multiplying any number by a number made up of only 9s, this pattern is much easier to remember and in my opinion, much more interesting.

The pattern

Let’s just look at a few squared values to get an idea of what is going on.

9² = 81
99² = 9801
999² = 998001
9999² = 99980001
99999² = 9999800001

Based on this small sample, it seems you can square a number consisting of n 9s by writing (n-1) nines, 8, (n-1) zeros, and then 1. You can spot check some larger values and see the pattern continues. For example: 9999999999² = 99999999980000000001.

Applying this, if someone randomly asked you “hey – what’s 99²?”, you would mentally note that there are 2 digits, and so the answer would have one 9, 8, one zero, and then 1: 9801.
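
If you want to spot check the pattern without typing each case into a calculator, here is a quick loop in R (the language used for the simulations elsewhere on this blog). A small sketch, with one caveat: it is only exact up to seven 9s, since larger squares exceed the integers R’s doubles can represent exactly.

#print n nines squared, for n = 1 to 7
#(exact only up to seven 9s within double precision)
for (n in 1:7) {
  nines = as.numeric(paste(rep("9", n), collapse = ""))
  cat(nines, "squared is", format(nines^2, scientific = FALSE), "\n")
}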

Why does this work

As you know, simply showing a few examples is not a proof, right? I’ve only shown that this pattern holds for these specific examples. A proof would show that it holds generally, without us having to type numbers into Wolfram Alpha to check for the rest of time. To understand the proof though, I will start by showing why it holds for one particular case: 9999².

Consider the definition of squaring. You know that 3² is just the product of 3 and 3. Naturally, the same is true here, so:

9999^2 = 9999 \times 9999

But what is multiplication really? It is repeated addition. With something like 3 \times 5, we are saying “add 3 to itself 5 times”. Using this, you could write 3 \times 5 as 3 + 3 + 3 + 3 + 3 or, if you wanted, 3 \times 4 + 3. Applying that here:

9999 \times 9999 = 9999 \times 9998 + 9999

Now here is the “trick”. I’m going to add and subtract 9998. The way I will do that is by changing 9999 \times 9998 to 10\,000 \times 9998. This will mean I now have ten-thousand 9998s being added, which is too many, so I will subtract one of them from 9999.

9999 \times 9998 + 9999 = 10\,000 \times 9998 + 9999 - 9998

Simplifying this, we get:

9999^2 = 10\,000 \times 9998 + 1

And how do you multiply a value by 10,000? You write the value with four zeros following it. Here, you then add 1: 9998 followed by four zeros is 99,980,000, and adding 1 gives 99,980,001. This gives us our pattern of three 9s, 8, three 0s, and 1.

Generalized

The same technique will work for any number made up of all 9s. We just have to keep track of the number of digits along the way. Following the steps we used above:

\begin{aligned}\left(\underbrace{999 \cdots 9}_{n\text{ digits}}\right)^2 &= 999 \cdots 9 \times 999 \cdots 9\\&= 999 \cdots 9 \times 999 \cdots 8 + 999 \cdots 9\\&= \underbrace{1000 \cdots 0}_{(n + 1)\text{ digits}} \times 999 \cdots 8 + 999 \cdots 9 - 999 \cdots 8\\&= \underbrace{1000 \cdots 0}_{(n + 1)\text{ digits}} \times 999 \cdots 8 + 1\\&= \underbrace{999 \cdots 8}_{n\text{ digits}}\,\underbrace{000 \cdots 0}_{n\text{ digits}} + 1\\&= \underbrace{999 \cdots 8}_{n\text{ digits}}\,\underbrace{000 \cdots 1}_{n\text{ digits}}\\&= \underbrace{999 \cdots 9}_{(n - 1)\text{ digits}}\,8\,\underbrace{000 \cdots 0}_{(n - 1)\text{ digits}}\,1\end{aligned}

The number of digits is easy to keep track of, since adding 1 to a string of 9s takes us up a digit (999 + 1 = 1000, which has one more digit than 999). I’m not sure this would be as simple when squaring similar types of numbers like 777, 888, etc.

Pin codes and the birthday problem

In an introductory statistics class, there are always tons of ways to link back up to the “real world” – it’s one of the great things about the topic! One of my favorite things to talk about in class is the concept of bank pin codes. How many are possible? Are they really randomly distributed? How many people need to get together before you can be sure two people have the same pin code? So many questions, and all of them can be easily explored!

My movie idea: the pigeonhole principle

Our romantic lead is at a baseball game, rushing between innings to the ATM. On the way, he bumps into a young woman and they drop all the stuff they’re carrying. Apologetic, he helps her up while gathering both of their things and is off to get his cash. It’s only later that he realizes that he accidentally used her ATM card! Not only that – he has some strange charges on his bank statement too! They must have mixed up cards… but is it really possible they share the same code? If so… they are clearly meant to be together…

[Naturally the rest of the movie is her thinking that he stole the card and then eventually them falling in love. At some point, a random person will vaguely explain the pigeonhole principle and give the movie its name – no stealing my idea]

Could this really happen? Yes! In fact, as we will see later, you probably wouldn’t even need the pair to be at a baseball game to make it reasonably likely. Before we get to that though, let’s look at the basic math.

How many pin codes are possible?

We will work with pin codes that are 4 digits long, where each digit can be any whole number 0–9. In general, the multiplication principle states that if you can break an action into steps, then the number of ways you can complete the action is found by multiplying the number of ways you can complete each step. For pin codes, we can think of selecting each digit as a step. There are 10 choices for each step, so:

10 \times 10 \times 10 \times 10 = 10\,000

This shows there are 10,000 possible pin codes.

How many people do you need to get together to guarantee at least two share a pin code?

The pigeonhole principle is a really simple idea that can be used to prove all kinds of things, from the complex to the silly. This rule states that if you have n pigeonholes but n + 1 (or more) pigeons, then at least 2 pigeons are sharing a pigeonhole. Another way to think about it: if a classroom has 32 chairs, but the class has 33 students… well, someone is sitting on someone’s lap (and class just got weird).

For pin codes, this means that you only need 10,001 people together to guarantee that at least two share the same pin code. You can imagine that if you were handing out distinct pin codes, you would run out after the 10,000th person. At that point, the 10,001st person would have to get the same code as someone else.

There are definitely more than 10,000 people at a baseball game – so the movie idea works! I’m going to be a millionaire! Ok, back to math…

Simulations show you need way fewer than 10,000 people

Everything we have talked about so far is based on the idea that pin codes are randomly selected by people. That is, we are assuming that any pin code has the same chance of being used by a person as any other. For now, we will continue that assumption but as you can imagine, people definitely don’t behave this way.

The question

If we randomly assign pin codes, how many assignments (on average) will there be before there is a repeated code? We know by the pigeonhole principle that a repeat is guaranteed after 10,000. But what is the typical number? Surely, through randomness, it often takes fewer than 10,000, right?

The simulation

For the sake of ease with writing the code, we will assign each pin code a whole number 1 – 10,000. So you can imagine the code 0000 is 1, the code 0001 is 2, and the code 9999 is 10,000. This way, assigning a pin code is really just assigning a random number from 1 to 10,000. In fact, we can now state the problem as:

“How many random selections from {1,2,…,10000} until there is a repeated value?”
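
As a quick aside, the numbering scheme itself is a one-liner in R if you ever want it (number_to_pin is just a name I made up for this sketch):

#map a number k (from 1 to 10,000) to its 4-digit code k - 1
number_to_pin = function(k) sprintf("%04d", k - 1)
number_to_pin(1)      # "0000"
number_to_pin(2)      # "0001"
number_to_pin(10000)  # "9999"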

Let’s look at the code!

pin_codes = 1:10000        #this is the set 1 through 10,000
selected = rep(0, 10000)   #placeholder for selected pins
count = 0                  #how many pins assigned so far
i = 1                      #index into "selected"
repeat {                   #loop that keeps selecting pin codes
  pin = sample(pin_codes, 1, replace = TRUE)  #select the pin
  if (pin %in% selected) { #if already selected then stop
    break
  }
  selected[i] = pin        #put the selected pin into "selected"
  count = count + 1        #keep track of how many you selected
  i = i + 1
}
count                      #type this to see how many were selected before a repeat

If you copy and paste this code into R, you might be surprised. My results the first three times were: 120, 227, 41.

This seems to suggest that through random selection, it only took assigning 120 pin codes before a repeat (in the first trial), 227 (in the second), and in the last trial it only took 41! This can’t be right?!

Loop it

Maybe through randomness, we just had some unusual trials. Running 500 or 1000 trials should show the overall trend. The code below is the same (almost) but I wrote a loop around it so that it would repeat the same experiment 500 times. If you try this on your computer note that it is a little slow (I didn’t consider efficiency at all when writing this).

pin_codes = 1:10000
counts = rep(0, 500)
for (i in 1:500) {
  selected = rep(0, 10000)
  count = 0
  j = 1
  repeat {
    pin = sample(pin_codes, 1, replace = TRUE)
    if (pin %in% selected) {
      break
    }
    selected[j] = pin
    count = count + 1
    j = j + 1
  }
  counts[i] = count
}

Type in “mean(counts)” and it will give us the mean number of times that pin codes were randomly assigned before a repeat. The result?

mean(counts)
[1] 124.486

This tells us that on average, it only takes about 124 assignments before you see a repeat [1]. This is waaaay less than 10,000. What is going on?!

The birthday problem

A famous probability question is known as “the birthday problem”. Suppose that birthdays are equally likely to occur on any given day of the year (including the leap day). This means that there are 366 possible birthdays. By the pigeonhole principle, you are guaranteed to have two people share the same birthday as soon as you get 367 people together. But the probability is already almost 100% with only 70 people (and 50% with just 23 people). This is another very counterintuitive result.
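
If you want to see where those numbers come from, the exact probability is quick to compute in R. A minimal sketch under the same equally-likely assumption (p_shared is just a name I made up here):

#probability that at least two of k people share a birthday,
#assuming all n possible birthdays are equally likely
p_shared = function(k, n = 366) {
  1 - prod((n - 0:(k - 1)) / n)
}
p_shared(23)  # about 0.5
p_shared(70)  # more than 0.999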

Pin codes and birthdays?

This pin code problem is really the same as the birthday problem, just a little bigger, or more generalized. Just imagine that there are 10,000 possible birthdays – we are looking for the number of people needed to have two people with the same birthday under this much larger set. Realizing this and researching a bit, you find that this has been studied, and in fact the expected number of selections needed is:

1 + \displaystyle\sum\limits_{k=1}^{10,000}\,\dfrac{10,000!}{(10,000-k)!10,000^k}

Plugging that mess into Wolfram Alpha gives the result:

1 + \displaystyle\sum\limits_{k=1}^{10,000}\,\dfrac{10,000!}{(10,000-k)!10,000^k} \approx 1 + 124.932 = 125.932

Look at that – even over just 500 trials, our simulation was really close. It only takes about 125 to 126 selections (on average) before you see a repeated pin code (assuming they are randomly selected).
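
You can also skip Wolfram Alpha and compute the sum directly in R. The trick in this sketch is that the k-th term of the sum is a running product of the factors (n - j)/n, so cumprod avoids the enormous factorials:

n = 10000
#term k is (n/n) * ((n-1)/n) * ... * ((n-k+1)/n)
terms = cumprod((n - 0:(n - 1)) / n)
1 + sum(terms)  # approximately 125.93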

126 – that’s it!

But pin codes aren’t random

This is more true than you might realize. If you look at the DataGenetics article on pin codes, you will see that instead of making up 1 in 10,000 codes (or 0.01%), the pin 1234 appears to actually make up more than 10% of codes. Just for fun, I went ahead and changed the code to account for the probabilities of the top 20 codes. I then assigned all the remaining pin codes the remaining probability evenly – though this isn’t perfect, as uncommon pin codes are REALLY uncommon.

pr = rep((1 - 0.2683)/9980, 9980)  #remaining probability, spread evenly over the other 9,980 codes
top20 = c(0.10713, 0.06016, 0.01881, 0.01197, 0.00745,
          0.00616, 0.00613, 0.00526, 0.00516, 0.00512,
          0.00451, 0.00419, 0.00395, 0.00391, 0.00366,
          0.00304, 0.00303, 0.00293, 0.00290, 0.00285)  #probabilities of the 20 most common codes
pin_codes = 1:10000
counts = rep(0, 500)
for (i in 1:500) {
  selected = rep(0, 10000)
  count = 0
  j = 1
  repeat {
    pin = sample(pin_codes, 1, replace = TRUE, prob = c(top20, pr))
    if (pin %in% selected) {
      break
    }
    selected[j] = pin
    count = count + 1
    j = j + 1
  }
  counts[i] = count
}

This runs a loop of selecting pins until there is a repeat and then repeats that process 500 times (using these new probabilities for the first 20 codes). Checking the mean after one run I have:

mean(counts)
[1] 12.8

Even crazier! Considering how pin codes are not truly random at all, it looks like you would really only need around 12 to 13 people to have a repeat. Remember – there are 10,000 possibilities in general! (See note [2] below on a small correction made here.)

Summary

Here are the numbers all together:

  • By the pigeonhole principle – you are guaranteed that two people share a pin code if the group is larger than 10,000. BUT:
    • If pin codes are randomly distributed: ~126 people (on average) are needed before two share a pin code
    • Using just a little of the data on the true distribution: Only ~13 people [2] (again, on average) are needed before two share a pin code!

Including more of the data we have on the distribution would probably bring that number down even further. As you can see, it is always very interesting to compare the theory to the reality.

Notes

[1] My code counts how many pins were selected and stops counting when a repeat is encountered. So, it is really off by 1 from the expected number that you would select to have a repeat. This is a minor technicality overall, but worth noting when you see the expected value formula, which adds 1 to the sum.

[2] My original code had one of the probabilities as 0.0516 instead of 0.00516, and the mean after several runs was generally around 12. After fixing this probability, the mean seems to be a bit closer to 13, with several runs resulting in means of 12.5 to 12.8. It seems the top codes really are dominating the selection. It would be interesting to code in the details about the less likely pin codes (since they have a very tiny probability of being selected) and see whether this number is actually lower or not.

Best thing I ever did for my site – full page by page review

Did you know that mathbootcamps.com has been online since 2010? Wow! Some of the articles are now over 5 years old. No big deal right? Math hasn’t exactly changed in those 5 years…

The spirit of “just do something” has always been important to me. When I started the website, I didn’t really have any experience writing about math in that way. But, so what? You gotta start somewhere: I could have waited and waited for some magical inspiration, or simply started trying to write. I chose the latter.

In those 5 years, I went from just writing a bit on the website to working on all kinds of math writing projects for tons of different clients. I’ve had experience writing test prep material, textbooks, guides, and online lessons. I’ve even worked as an editor on a wide variety of projects. All of this has totally changed how I write, and over time the articles on mathbootcamps reflect it. The newest ones have a more cohesive visual style and much clearer writing. There is plenty of room for improvement, but the changes are noticeable.

If you have been writing on a blog or website for a while now, I bet your writing has improved quite a bit too! If you have a lot of content that you think people will come back to over time, then I definitely recommend going through this review process like I did.

Bring on the review!

The first step of my review was to decide on a post that I wanted to use as an “exemplar” standard for style and writing. I ended up choosing two:

http://www.mathbootcamps.com/how-to-make-a-stemplot/

http://www.mathbootcamps.com/how-to-read-a-boxplot/

These show exactly what I would like to see in most posts. They’re organized, they have an example that’s easy to follow, and the style, with headings and so on, is nice.

After this, I set up a spreadsheet with the following headings:

  • Category (my site is divided into several math categories by a menu up top)
  • Post title
  • Video (I wanted a yes or no as my plan is for each post to have a corresponding video)
  • Contains at least one example (yes or no)
  • Meets exemplar style (yes or no)
  • Meets exemplar standard (yes or no)
  • Comments (especially if there was some issue – I ended up having some typos I didn’t know about)

I ended up hiring someone else to go through and fill out the spreadsheet figuring that a different set of eyes would be really helpful. You could probably do this yourself but remember to be brutal in your evaluation!

The results

So…much…work to do.

[Image: the completed site review spreadsheet]

As I guessed, the older posts are just not meeting the standard. I had also forgotten that a while back I made posts that were just a video or a short bit of text – this is not the direction I want to take mathbootcamps now, so these need adjustment as well.

Verdict

This was really worth it. Working alone, I’ve been so focused on getting new content up (whenever I have time) that it was easy to not even think about how the old content might look. But, if I want to have a useful math website, ALL the content needs to be good.

Anybody with a website that has been around a while would do well to go through this same process. Maybe the changes over time were subtle but I’m sure another set of eyes will catch where some content just isn’t up to standard anymore. Visitors running across these pages or articles will be happy for the change!

Two apps I use every day as a math professor

For a long time I was a smartphone skeptic – happy with my dumbphone all the way up to 2010 and of course very judgmental of anyone I deemed to be “wasting time” with such a silly device. Well, here we are in 2015 and I have had my smartphone for more than a year and just like everyone said, it has become an indispensable tool.

Naturally I have a lot of cool apps right? (like neko atsume!) Sure – but there are a couple that actually help me day to day with my job.

#1 Wabbitemu

Wabbitemu will turn your phone into a TI83, TI84, or even the fancy-pants TI84+. Writing an exam key and too lazy to go find your graphing calculator? Well, this has you covered. I went old school and set it up as an 83. This is a screenshot straight off my phone (it sets it up so the phone menus only come up when you click).

[Image: Wabbitemu running as a TI83 on my Android phone]

Beautiful!

#2 Camscanner

Our online students take all of their exams on campus at our testing center. In the past, when I graded exams, the online students wouldn’t have the same experience of seeing all the feedback since they wouldn’t necessarily see the whole exam unless they picked it up. Camscanner has changed all of this. With a few clicks, you can create a really nice PDF using any picture or pictures (each picture can be set to be its own page).

[Image: a CamScanner PDF made from photos of handwritten pages]

So take a document, snap a pic of each page, and boom – nice PDF handout.

The screenshot above is from a time when I wrote the entire key to a practice exam on the board but hadn’t yet typed it up. Realizing that about 50% of my students were just taking pictures of the key instead of writing it down, I decided to test whether the app would make a good PDF from something like this, and it did. Really impressive (and now I don’t have to type up the key).

Honorable Mention: Panecal Plus

I can’t explain it, but when I want to just do a quick calculation, I tend to want to use a simpler calculator. That said, the default calculator on Android is just TOO simple. Here is where Panecal Plus comes in: an awesome little scientific calculator.

[Image: Panecal Plus on Android]

There is a free version supported by ads, but I like to pay for an app when it’s good so that I can help support developers more directly.

The “I don’t know why I have this but it is still cool” app: R Console Premium

One other honorable mention goes to R Console for Android. You ever just want to code a quick simulation when you aren’t near a computer and don’t care whether you can save your code? Yeaaah. Maybe I haven’t either, but that doesn’t mean this app isn’t cool.

[Image: R Console Premium on Android]

I swear that I really have used this a few times! How often was it NOT because it just looks cool? Well, I don’t know the answer to that lol.

And here you thought this post was going to talk about Wolfram Alpha, right?

Figuring out where to put videos on mathbootcamps pages

One thing I am personally working on is sharing more of the “behind the scenes” stuff that happens when you are running a small website like mathbootcamps – along with the challenges/interesting things you have to deal with. With that said, the biggest thing I am working on right now is what I like to call “The Video Situation.”

Mathbootcamps.com and the mathbootcamps youtube account have always been kind of separate entities. I might write an article on mathbootcamps and add a video, or just leave the article as-is. I might make a quick video of a math topic for youtube without a corresponding article. This is all because they are both just fun things to do, so I have been writing or recording as I feel like it, going with whatever topic seems interesting at the moment. While this has been fun, it isn’t how you get a website really going, and I have realized that what I want is for mathbootcamps.com to really get going!!

So considering how silly the disconnect between the two pieces is, I started at least matching up videos and posts on the site. For example, below you can see a post I did about making histograms using graphing calculators. The video is over on the right with the idea being that anyone who didn’t want to read a long post could just watch the video instead.

[Image: an older post with the video over in the right-hand sidebar]

This was working fine until recently, when Google started knocking sites down in search results if they don’t adjust to mobile devices (this is called responsive web design). To avoid the penalty, I set up a mobile theme for the site, and while it still needs some work (it isn’t that pretty and isn’t really branded like the rest of the site), it does the job and keeps my site from being penalized.

[Image: the mobile theme on my phone]

Maybe you notice the big problem?

No video! Even worse, the example in the post refers to the video.

My mobile theme just doesn’t show the videos on the right hand side of the posts. Even if it did, they would probably be really tiny and useless.

This means that it is time to truly get organized with the videos and the pages. As I am filling out the topics with new articles (you will notice the stats page is coming along nicely!), I can start incorporating the videos INTO the posts. This way, those visiting me on mobile will have the same content as everyone else and hopefully find what they need to help them with whatever math they are working on.

This also means going through a lot of old posts to figure out which ones already have a video and where exactly to place it. A big project for sure, but I’m thinking it will also help me find any posts that no longer cut it style-wise, and it will make the site better overall.

Generate a data set with a given correlation coefficient

Recently, I found myself needing to create scatterplots that represent specific values of the correlation coefficient r. This was for a writing project, but it is something that has come up with teaching as well. Showing students scatterplots for many different values of r seems to really help them conceptually, especially when it comes to understanding that not every data set with the same correlation will look exactly the same. Unfortunately, I have always been at the mercy of whatever examples I can find online or in textbooks. With this in mind, I set out to figure this problem out once and for all.

The problem: Given a desired correlation coefficient, generate a data set.

As it turns out, this is not that difficult of a problem! Using this overall solution, I wrote a simple function in R.

make_data_corr = function(corr, n) {
  x = rnorm(n, 0, 1)           #first variable: standard normal sample
  y = rnorm(n, 0, 1)           #independent normal noise
  a = corr/(1 - corr^2)^0.5    #weight chosen so cor(x, z) is corr on average
  z = a*x + y                  #second variable built from x plus the noise
  the_data = data.frame(x, z)
  return(the_data)
}

The inputs here are corr (the desired value for the correlation coefficient) and n (the desired number of paired data values). You will notice that I didn’t add any kind of validation or anything like that to this function, so if you put in a strange value for r or n, you are on your own. The resulting output is a data frame with your data set being x and z. Here is an example of it in action:

example=make_data_corr(0.85,35)
plot(example$x,example$z)

[Image: scatterplot of the generated data]

At smaller sample sizes, the correlation coefficient is CLOSE but not exact. Here, r = 0.92, but when I ran the function again with n = 350 I ended up with r = 0.83. For my purposes this is good enough, but it is a consideration for possible improvements (at this stage, I haven’t thought about how to approach this).
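
If you want to check how close any particular run landed, you can look at the sample correlation of the output directly (using the example data frame generated above):

#compare the achieved sample correlation to the 0.85 requested
cor(example$x, example$z)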

Eventually I may make this into a small webapp that anyone can use (including myself). Until then, if you find a use for this or find a way to make this better, certainly let me know. It is an interesting little problem to play with!

Coming up with realistic data for linear regression examples

Whether it is for writing or for teaching, I am always in need of new and useful data sets. Since the GAISE report was released, everyone who teaches stats in any form is hearing over and over the importance of using REAL LIFE DATA and I agree that this is a good general practice. But sometimes, you need a data set that illustrates a specific idea or students need a simpler context to start with before tackling the (often) more complicated real life applications. I love going through a long real life application in class, but when it comes time for a quiz or a test, I just need to know that students can apply the basic techniques and explain concepts as they apply to the situation.

Let’s say that I am writing a new exam item and need a simple linear regression data set. Students are going to use their TI83 or 84 to get the correlation coefficient, the coefficient of determination, and the equation for the line, and finally to interpret these values (and things like the slope or y-intercept) in context.

I know I am not alone in this, so I will show you how we can get a reasonable, but not exactly real, data set for them to work with. My favorite tool for this is R (you don’t need to be an expert programmer for this!) but I figure other similar tools will work just as well.

First the context: Suppose a company thinks there may be a linear relationship between the amount they spend on advertising each month (in thousands of dollars) and the total monthly sales (also in thousands of dollars). (**insert instructions to student about performing regression etc**)

In an exam question, I wouldn’t want too many data values, as this increases the chance of calculator typos (so tough to grade! Is it really a typo? Did they know what they were doing?). So, I will come up with 8 realistic-looking advertising amounts. It’s too easy to accidentally have a pattern in data I think of off the top of my head, so instead I will use the rnorm function in R. This function lets me generate random values from a normal distribution.

[Image: R console output of the rnorm call]

This function works with the following inputs:

rnorm(how many data values, mean, standard deviation)

To make this work, I did need to decide on a reasonable mean amount of money spent on advertising each month and a reasonable standard deviation. As you can see, I told it to give me 8 random normal values from a distribution with a mean of 10.6 and a standard deviation of 3.7. But, oops, I should probably round these. Since I will be using this data later, I will do the rounding in R.

[Image: R console output after rounding with round()]
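
For reference, the calls in those screenshots look roughly like this (your eight values will differ, since they are random):

advertising = rnorm(8, 10.6, 3.7)    #8 monthly ad-spend values (thousands of dollars)
advertising = round(advertising, 2)  #round to two decimal places
advertising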

There we go! Much better. Above, I used the round() function: I put in what I wanted to round as the first entry and how many decimal places I wanted as the second. The next step is to get some good y-values (total sales) while keeping the linear relationship I would like. This requires another judgement call: I must decide what equation to base these values on. I don’t know if there is one right answer here, but I often do some googling to make it as realistic as possible.

For the sake of this example, I will just pick one here and say: y = 1.3x + 2.7. (where y is the sales for the month and x is the advertising spend; both in thousands). Since an exact fit will be very boring, I will add in some random error when calculating the sales values. For this one, I will go a little high with it by using a standard deviation of 3 (the mean should be zero).

[Image: R console screenshot of the sales calculation]

For non-programmers, I will explain this code a bit.

The first line:

sales = c(0,0,0,0,0,0,0,0)

is where I initialize the variable sales. This code sets up sales as a set of 8 zeros. Each zero will then be replaced by the values I calculate in the for loop shown below. (Sidenote: technically, in R, sales is a vector and not simply “a variable”, but this distinction isn’t important here.)

for (i in 1:8) {
  sales[i] = 1.3*advertising[i] + 2.7 + rnorm(1, 0, 3)
}

With the for loop, I am telling R to take each entry of “advertising” (in R, indexing starts at 1) and calculate a “sales” value using the equation I came up with, along with a little error (adding the random normal value from rnorm). Below, you can see my resulting data.

>sales
[1] 15.32 20.62 22.56 10.74 6.85 17.48 18.13 12.18

From here, it is worth looking to see how it all comes out when students work with this on their calculators. As an exam question, it should be pretty routine – a decent fit and not too crazy looking on the scatterplot.

[Images: TI84 regression output and the scatterplot]

Pretty good. Notice that my intercept is a little different than planned due to the error I added, but that is to be expected. Here is the final product:

Advertising spend (thousands of dollars): 12.29, 15.11, 14.44, 10.17, 3.56, 11.45, 11.10, 8.18
Total sales (thousands of dollars): 15.32, 20.62, 22.56, 10.74, 6.85, 17.48, 18.13, 12.18

Add in a story (What does the company make? What’s the company’s name? What is their motivation?) and you have a nice, simple exam problem. This is also a great problem to talk about AFTER the exam. Can we expect that sales are always linearly related to spend? So if I spend more, will I always sell more? These questions about extrapolation and the true application of a linear regression model are important in any statistics classroom, and in applying these techniques in real life.

Understanding the common core: HSS.IC.B.6 – use simulations to decide if differences between parameters are significant

Although I am a college professor, a great deal of my freelance writing involves working with the common core state standards. Most of this time, especially at the beginning, was spent trying to decipher exactly what skills the common core is after and how to best assess or address those skills.

A particularly tough-to-interpret group of standards is in the domain “making inferences and justifying conclusions”. These standards are focused on helping students develop deep intuition with statistics-based thinking. For example, a question like “a coin landed on tails 65 times out of 100 – is this enough to make us question whether it is fair?” would be a part of this domain. All of these standards require some really deep thinking on the part of students.

HSS.IC.B.6

This standard states that students should be able to:

Use data from a randomized experiment to compare two treatments; use simulations to decide if differences between parameters are significant.

Many online resources out there interpret this as meaning that students should be able to use tools such as a 2-sample t-test to compare two populations. Personally, I think this completely misses the mark of this entire domain of standards. At this level, we aren’t expecting high school students to apply hypothesis testing or confidence interval calculations formally. Instead, we want them to start thinking about the meaning behind these procedures before they see them formally presented at the college level or in an AP stats course. These types of ideas will give students a much better sense of the p-value and the whole process of hypothesis testing once those topics are introduced.

 An Example

Let’s use a typical question that would be aligned to this standard as a discussion tool. The data for this question and the resulting histogram were all generated in R (see the bottom of the post for code).

Suppose that two researchers want to determine if high school students that are offered encouraging remarks complete a difficult task faster, on average, than those who aren’t.  In order to test this, they select two random samples of 25 high school students each. The first group is asked to work on a difficult puzzle and offered no feedback as they work. The second group is asked to do the same but are also given encouraging comments such as “you almost got it” or “that’s a good idea” as they work. For the first group (no encouragement), the mean time to complete the puzzle was 28.1 minutes with a standard deviation of 6.7 minutes. For the second group, the mean time was 27.2 minutes with a standard deviation of 5.5 minutes.

In order to test the significance of this result, the researchers used a computer to randomly assign individual times to each group and then compute the new mean difference between the first and second groups. They then repeated this process 1,000 times and plotted all of the resulting differences on the plot below.

[Image: histogram of the 1,000 simulated differences between the group means]

The question here might then ask students to determine whether the observed difference between the means is statistically significant, or to explain whether this should lead the researchers to believe that those given encouragement complete the task faster. Both are deep, critical-thinking questions that go beyond applying a formula.

Using the graph, we would hope that they would see that the observed difference of 28.1 – 27.2 = 0.9 minutes is within a range of values that is frequently observed when the groups are assigned randomly (it is not a rare difference – it came up a lot in simulation). Therefore, the experiment’s results are not statistically significant as they could be due to chance alone. Through resampling, they are able to see how the samples might behave if the differences WERE due to chance (as they were in the simulation).

As you can see, this type of question indirectly has students thinking about a p-value and its implications without formally introducing those ideas. Certainly they could run a 2-sample t-test or similar, but that would be robotic compared to the critical thinking that the common core writers were hoping students would develop. The ultimate goal is to have students use computers, or even physical simulations (such as a special deck of cards, or flipping coins), to understand uncertainty and, as mentioned, to develop an intuition for statistical thinking in general.

If you are finding yourself still trying to wrap your mind around this standard, you might find the following related articles interesting: Why Resampling is Better than Hypothesis Tests and Confidence Intervals, which comments on a similar high school standard in New Zealand, and Resampling Statistics, an overview of techniques from East Carolina University.

 R Code Used for This Example

#create the data for the two groups
#(the sample means and standard deviations in the question
# were calculated from these groups)
no_encourage = rnorm(25, 28.6, 7.1)
encourage = rnorm(25, 27.1, 6.4)

#create the combined group
group = c(encourage, no_encourage)

#initialize the difference vector
diff = 1:1000

#resample: shuffle the times, split into two new groups,
#and record the difference in means
for (i in 1:1000) {
  randomized = sample(group)
  new_no_encourage = randomized[1:25]
  new_encourage = randomized[26:50]
  diff[i] = mean(new_no_encourage) - mean(new_encourage)
}
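
From here, the histogram shown in the post can be drawn from diff. A minimal version (the actual figure presumably had its own title and axis labels):

#plot the simulated differences in means
hist(diff)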