# Understanding Pearson’s r

In this article, we'll build some numerical and graphical intuition for Pearson's r.

Karl Pearson developed the statistical measure now known as Pearson's r, or the Pearson product-moment correlation coefficient, around the turn of the 20th century. Briefly stated, Pearson's r is the covariance of two variables divided by the product of their standard deviations. In other words, setting aside the units and scale of the two variables, is an increase in one variable consistently reflected in a comparable increase or decrease in the other?

Let’s give an example. I’ll take advantage of this post to also do a bit of R and ggplot2 practice for those who want to follow along. I’ll create some “perfect” data, where y is completely predicted by x in a linear relationship. In my case, y increases a bit more than double with every increase in x. This will obviously result in a clearly linear relationship:
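The original snippet isn't shown here, but a minimal R sketch of such "perfect" data might look like this (the particular x values are illustrative; the slope of 2.3 matches the discussion below):

```r
library(ggplot2)

# "Perfect" data: y is completely determined by x, with slope 2.3
x <- 1:20
y <- 2.3 * x
df <- data.frame(x = x, y = y)

# The points fall exactly on a straight line
ggplot(df, aes(x = x, y = y)) +
  geom_point()
```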

We can eyeball the slope to see that the line passing through these points has a slope of 2.3. But what does the relationship look like once we account for the different scales of our two variables? We could replot the data using the z-score of each point: how many standard deviations it is from the mean of x (or the mean of y):
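A sketch of that replotting in R (computing the z-scores by hand to make the transformation explicit; R's built-in `scale()` does the same thing):

```r
library(ggplot2)

x <- 1:20
y <- 2.3 * x

# z-scores: how many standard deviations each value is from its mean
df <- data.frame(x_z = (x - mean(x)) / sd(x),
                 y_z = (y - mean(y)) / sd(y))

# Same points, now on a common standard-deviation scale
ggplot(df, aes(x = x_z, y = y_z)) +
  geom_point()
```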

Here we see that with every standard deviation increase in x, there is a corresponding standard deviation increase in y. The slope of this line is 1:
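We can confirm the slope of 1 by fitting a linear model to the z-scored data (a sketch, recreating the same data as above):

```r
x <- 1:20
y <- 2.3 * x

# z-score both variables
x_z <- (x - mean(x)) / sd(x)
y_z <- (y - mean(y)) / sd(y)

# Fit a linear model through the scaled points
fit <- lm(y_z ~ x_z)
coef(fit)["x_z"]  # slope of the scaled line: 1
```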

The slope is the same as our Pearson’s r:
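In R, `cor()` computes Pearson's r by default, so we can check this directly:

```r
x <- 1:20
y <- 2.3 * x

cor(x, y)  # 1: identical to the slope of the z-scored line
```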

Since we scaled by z-scores (numbers of standard deviations) to get the correlation, we can work backwards to recover the slope of the line in the original, unscaled model by multiplying Pearson's r by the standard deviation of y divided by the standard deviation of x (our rise-over-run scale factor):
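A sketch of that back-calculation, compared against the slope `lm()` reports on the unscaled data:

```r
x <- 1:20
y <- 2.3 * x

r <- cor(x, y)

# Rescale r back into the original units of x and y
r * sd(y) / sd(x)     # 2.3, the slope of the unscaled line

# Compare with the slope from a linear model on the raw data
coef(lm(y ~ x))["x"]  # 2.3
```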

What if we have a (more realistic) dataset with some error added to it? Try running the following, and you'll see that the slope of the linear model fit to the z-scored data is the same as Pearson's r:
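The original code isn't shown, but a minimal version might look like this (the seed, sample size, and noise level are arbitrary choices for illustration):

```r
set.seed(42)  # arbitrary seed, for reproducibility

# Same linear relationship as before, plus random noise
x <- 1:50
y <- 2.3 * x + rnorm(50, sd = 10)

r <- cor(x, y)

# Slope of the model fit to the z-scored data
slope_z <- coef(lm(scale(y) ~ scale(x)))[2]

all.equal(r, unname(slope_z))  # TRUE
```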

Pearson's r is therefore a measure of how close to perfectly linear (1) or perfectly inversely linear (-1) the relationship between two variables is, when each variable is considered in terms of its standard deviations. This scaling means r has the same interpretation regardless of the scale of your variables, and its relationship to the slope of the line in the model of your original variables is straightforward: slope = r × sd(y) / sd(x).

Hopefully this gives you a bit more geometric intuition for Pearson's r and how it relates to linear modeling!