CS-109A Introduction to Data Science
Lab 4: Multiple and Polynomial Regression¶
Harvard University
Fall 2018
Instructors: Pavlos Protopapas and Kevin Rader
Lab Instructor: Rahul Dave
Authors: Rahul Dave, David Sondak, Will Claybaugh, Pavlos Protopapas
## RUN THIS CELL TO GET THE RIGHT FORMATTING
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
Table of Contents¶
- Learning Goals
- Polynomial Regression, and Revisiting the Cab Data
- Multiple regression and exploring the Football data
- A nice trick for forward-backwards
Learning Goals¶
After this lab, you should be able to
- Implement arbitrary multiple regression models in both SK-learn and Statsmodels
- Interpret the coefficient estimates produced by each model, including transformed and dummy variables
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.api import OLS
from sklearn import preprocessing
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from pandas.plotting import scatter_matrix
import seaborn as sns
%matplotlib inline
statsmodels is focused on the inference task: guess good values for the betas and discuss how certain you are in those answers.
sklearn is focused on the prediction task: given [new] data, guess what the response value is. As a result, statsmodels has lots of tools to discuss confidence, but isn't great at dealing with test sets. Sklearn is great at test sets and validations, but can't really discuss uncertainty in the parameters or predictions. In short:
- sklearn is about putting a line through it and predicting new values using that line. If the line gives good predictions on the test set, who cares about anything else?
- statsmodels assumes more about how the data were generated, and (if the assumptions are correct) can tell you about uncertainty in the results
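To make the contrast concrete, here is a small toy illustration of the two workflows. The data are made up purely for demonstration and are not part of the lab's analysis.
# Toy illustration (made-up data): the same line, two workflows
from sklearn.linear_model import LinearRegression

toy_x = np.arange(10).reshape(-1, 1)
toy_y = 2 * toy_x.ravel() + np.random.normal(size=10)

# sklearn: fit a line, then predict on new inputs
sk_fit = LinearRegression().fit(toy_x, toy_y)
sk_fit.predict(np.array([[10.0]]))

# statsmodels: fit the same line, then ask about the coefficients themselves
sm_fit = OLS(toy_y, sm.add_constant(toy_x)).fit()
sm_fit.params, sm_fit.conf_int()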
Some terms¶
- R-squared: An interpretable summary of how well the model did. 1 is perfect, 0 is a trivial baseline model, negative is worse than the trivial model
- F-statistic: A value testing whether we're likely to see these results (or even stronger ones) if none of the predictors actually mattered.
- Prob (F-statistic): The probability that we'd see these results (or even stronger ones) if none of the predictors actually mattered. If this probability is small then either A) some combination of predictors actually matters or B) something rather unlikely has happened
- coef: The estimate of each beta. This has several sub-components:
- std err: The amount we'd expect this value to wiggle if we re-did the data collection and re-ran our model. More data tends to make this wiggle smaller, but sometimes the collected data just isn't enough to pin down a particular value.
- t and P>|t|: similar to the F-statistic, these measure the probability of seeing coefficients this big (or even bigger) if the given variable didn't actually matter. A small probability is evidence that the coefficient is nonzero, but it doesn't by itself mean the variable has a large or practically important effect
- [0.025 0.975]: Endpoints of the 95% confidence interval. This is an interval drawn in a clever way which gives an idea of where the true beta value might plausibly live. (If you want to understand why "there's a 95% chance the true beta is in the interval" is wrong, start a chat with Will : )
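As a rough map from these terms to code, the same quantities are available as attributes on a fitted statsmodels result. The toy data below is purely illustrative.
# Sketch (toy data): where the summary quantities live on a fitted statsmodels result
toy_design = sm.add_constant(np.arange(20.0))
toy_response = 3 + 0.5 * np.arange(20.0) + np.random.normal(size=20)
toy_fit = OLS(toy_response, toy_design).fit()

toy_fit.rsquared, toy_fit.fvalue, toy_fit.f_pvalue   # R-squared, F-statistic, Prob (F-statistic)
toy_fit.params, toy_fit.bse, toy_fit.pvalues         # coef, std err, P>|t|
toy_fit.conf_int()                                   # the [0.025, 0.975] interval endpoints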
Part 1: Polynomial Regression, and Revisiting the Cab Data¶
# read in the data, break into train and test
cab_df = pd.read_csv("data/dataset_1.txt")
train_data, test_data = train_test_split(cab_df, test_size=.2, random_state=42)
cab_df.head()
cab_df.shape
# do some data cleaning
X_train = train_data['TimeMin'].values.reshape(-1,1)/60
y_train = train_data['PickupCount'].values
X_test = test_data['TimeMin'].values.reshape(-1,1)/60
y_test = test_data['PickupCount'].values
def plot_cabs(cur_model, poly_transformer=None):
    # build the x values for the prediction line
    x_vals = np.arange(0, 24, .1).reshape(-1, 1)
    # if needed, build the design matrix
    if poly_transformer:
        design_mat = poly_transformer.fit_transform(x_vals)
    else:
        design_mat = x_vals
    # make the prediction at each x value
    prediction = cur_model.predict(design_mat)
    # plot the prediction line, and the test data
    plt.plot(x_vals, prediction, color='k', label="Prediction")
    plt.scatter(X_test, y_test, label="Test Data")
    # label your plots
    plt.ylabel("Number of Taxi Pickups")
    plt.xlabel("Time of Day (Hours Past Midnight)")
    plt.legend()
    plt.show()
from sklearn.linear_model import LinearRegression
fitted_cab_model0 = LinearRegression().fit(X_train, y_train)
plot_cabs(fitted_cab_model0)
fitted_cab_model0.score(X_test, y_test)
We can see that there's still a lot of variation in cab pickups that's not being caught by a linear fit. And the linear fit is predicting massively more pickups at 11:59pm than at 12:00am. However, we can add columns to our design matrix for $TimeMin^2$ and $TimeMin^3$ and so on, allowing a wigglier polynomial that will better fit the data.
We'll be using sklearn's PolynomialFeatures to take some of the tedium out of building the new design matrix. In fact, if all we want is a formula like $y \approx \beta_0 + \beta_1 x + \beta_2 x^2 + ...$ it will directly return the new design matrix.
transformer_3 = PolynomialFeatures(3, include_bias=False)
new_features = transformer_3.fit_transform(X_train)
new_features
A few notes on PolynomialFeatures:
- The interface is a bit strange. PolynomialFeatures is a 'transformer' in sklearn. We'll be using several transformers that learn a transformation on the training data and then apply that transformation on future data. On these (more typical) transformers it makes sense to have a .fit() and a separate .transform(). With PolynomialFeatures, the .fit() is pretty trivial, and we often fit and transform in one command, as seen above.
- You rarely want include_bias (a column of all 1s), since sklearn will add it automatically and statsmodels can just add_constant right before you fit to the design matrix.
- If you want polynomial features for several different variables, you should call .fit_transform() separately on each column and append all the results to the design matrix (unless you also want interaction terms between the newly-created features). See np.concatenate for joining arrays; a short sketch follows this list.
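As a minimal sketch of that last point, here are two hypothetical predictor columns (random placeholders, not the cab data) expanded separately and joined without interaction terms.
# Sketch: polynomial features for two hypothetical columns, joined without cross terms
col_a = np.random.uniform(0, 1, size=(100, 1))
col_b = np.random.uniform(0, 1, size=(100, 1))
poly = PolynomialFeatures(3, include_bias=False)
two_var_design = np.concatenate(
    [poly.fit_transform(col_a), poly.fit_transform(col_b)], axis=1)
two_var_design.shape   # (100, 6): degrees 1-3 of each column, no interaction terms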
fitted_cab_model3 = LinearRegression().fit(new_features, y_train)
plot_cabs(fitted_cab_model3, transformer_3)
Questions:
- Calculate the polynomial model's $R^2$ performance on the test set.
- Does the polynomial model improve on the purely linear model?
- Make a residual plot for the polynomial model. What does this plot tell us about the model?
your answer here
# your code here
# your code here
# your code here
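For reference, here is a minimal sketch of one way to approach the first and third questions above; the choice to plot residuals against time of day on the training set is an assumption, not the only reasonable option.
# Sketch: R^2 of the cubic model on the test set
test_features_3 = transformer_3.fit_transform(X_test)
print(r2_score(y_test, fitted_cab_model3.predict(test_features_3)))

# Sketch: residuals vs. time of day on the training set
train_resid = y_train - fitted_cab_model3.predict(new_features)
plt.scatter(X_train, train_resid, alpha=0.5)
plt.axhline(0, color='k')
plt.xlabel("Time of Day (Hours Past Midnight)")
plt.ylabel("Residual (Actual - Predicted Pickups)")
plt.show()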
Other features¶
Polynomial features are not the only constructed features that help fit the data. Because these data have a 24-hour cycle, we may want to build features that follow such a cycle. For example, $\sin\left(\frac{2\pi x}{24}\right)$, $\sin\left(\frac{2\pi x}{12}\right)$, $\sin\left(\frac{2\pi x}{8}\right)$. Other feature transformations are appropriate for other types of data. For instance, certain feature transformations have been developed for geographical data.
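As a sketch of what such cyclic features could look like for the cab data (the specific periods and the reuse of a plain linear model here are illustrative assumptions):
# Sketch: append sine features with 24-, 12-, and 8-hour periods to the hour-of-day column
cyclic_features = np.concatenate([
    X_train,
    np.sin(2 * np.pi * X_train / 24),
    np.sin(2 * np.pi * X_train / 12),
    np.sin(2 * np.pi * X_train / 8),
], axis=1)
fitted_cab_cyclic = LinearRegression().fit(cyclic_features, y_train)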
Part 2: Multiple regression and exploring the Football data¶
Let's move on to a truly interesting dataset. The data imported below were scraped by Shubham Maurya and record various facts about players in the English Premier League. Our goal will be to fit models that predict the players' market value (what the player could earn when hired by a new team), as estimated by transfermrkt.com.
- name: Name of the player
- club: Club of the player
- age: Age of the player
- position: The usual position on the pitch
- position_cat: 1 for attackers, 2 for midfielders, 3 for defenders, 4 for goalkeepers
- market_value: As on transfermrkt.com on July 20th, 2017
- page_views: Average daily Wikipedia page views from September 1, 2016 to May 1, 2017
- fpl_value: Value in Fantasy Premier League as on July 20th, 2017
- fpl_sel: % of FPL players who have selected that player in their team
- fpl_points: FPL points accumulated over the previous season
- region: 1 for England, 2 for EU, 3 for Americas, 4 for Rest of World
- nationality: Player's nationality
- new_foreign: Whether a new signing from a different league, for 2017/18 (till 20th July)
- age_cat: a categorical version of the Age feature
- club_id: a numerical version of the Club feature
- big_club: Whether one of the Top 6 clubs
- new_signing: Whether a new signing for 2017/18 (till 20th July)
As always, we first import, verify, split, and explore the data.
Part 2.1: Import, verification, and grouping¶
league_df = pd.read_csv("data/league_data.txt")
print(league_df.dtypes)
league_df.head()
league_df.shape
league_df.describe()
(Stratified) train/test split¶
We want to make sure that the training and test data have appropriate representation of each region; it would be bad for the training data to entirely miss a region. This is especially important because some regions are rather rare.
- Use the train_test_split function and its 'stratify' argument to split the data, keeping equal representation of each region. (This will not work out of the box on this data. Deal with the resulting issue.)
- Deal with the issue you encountered above.
- How did you deal with the error generated by train_test_split? How did you justify your action?
your answer here:
# your code here
train_data.shape, test_data.shape
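For reference, the basic stratified call looks like the sketch below. On this dataset it may raise an error (for example, if the stratify column contains missing values or a category with a single row), and handling that is the point of the exercise; the random_state here is an arbitrary choice.
# Sketch: the basic stratified split call (may raise an error on this data; see the exercise above)
train_data, test_data = train_test_split(
    league_df, test_size=0.2, stratify=league_df['region'], random_state=42)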
Now that we won't be peeking at the test set, let's explore and look for patterns! We'll introduce a number of useful pandas and numpy functions along the way.
Groupby¶
Pandas' .groupby() function is a wonderful tool for data analysis. It allows us to analyze each of several subgroups.
Many times, .groupby() is combined with .agg() to get a summary statistic for each subgroup. For instance: What is the average market value, median page views, and maximum fpl points for each player position?
train_data.groupby('position').agg({
    'market_value': np.mean,
    'page_views': np.median,
    'fpl_points': np.max
})
train_data.position.unique()
train_data.groupby(['big_club', 'position']).agg({
    'market_value': np.mean,
    'page_views': np.mean,
    'fpl_points': np.mean
})
Part 2.2: Linear regression on the football data¶
This section of the lab focuses on fitting a model to the football data and interpreting the model results. The model we'll use is
$$\text{market_value} \approx \beta_0 + \beta_1\text{fpl_points} + \beta_2\text{age} + \beta_3\text{age}^2 + \beta_4\log_2\left(\text{page_views}\right) + \beta_5\text{new_signing} + \beta_6\text{big_club} + \beta_7\text{position_cat}$$
We're including a 2nd degree polynomial in age because we expect pay to increase as a player gains experience, but then decrease as they continue aging. We're taking the log of page views because they have such a large, skewed range and the transformed variable will have fewer outliers that could bias the line. We choose the base of the log to be 2 just to make interpretation cleaner.
- Build a design matrix function and fit this model to the training data. How good is the overall model? (One possible sketch follows the placeholder cells below.)
- Interpret the regression model. What is the meaning of the coefficient for:
    - age and age$^2$
    - $\log_2(\text{page_views})$
    - big_club
- What should a player do in order to improve their market value? How many page views should a player go get to increase their market value by 10?
# your code here
# your code here
your answer here
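As a reference point, here is a minimal sketch of one possible design-matrix builder consistent with the model above and with the attribute names used in the next cell (fitted_model_1, age, age_squared). The exact column names, the use of add_constant, and fitting with statsmodels' OLS are assumptions, not the only valid approach.
# Sketch: one possible design matrix for the football model (column names are assumptions)
def build_football_design(df):
    design = pd.DataFrame({
        'fpl_points': df['fpl_points'],
        'age': df['age'],
        'age_squared': df['age'] ** 2,
        'log_page_views': np.log2(df['page_views']),
        'new_signing': df['new_signing'],
        'big_club': df['big_club'],
        'position_cat': df['position_cat'],
    })
    return sm.add_constant(design)

train_design = build_football_design(train_data)
test_design = build_football_design(test_data)
y_train = train_data['market_value'].values
y_test = test_data['market_value'].values

fitted_model_1 = OLS(y_train, train_design).fit()
fitted_model_1.summary()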
agecoef = fitted_model_1.params.age
age2coef = fitted_model_1.params.age_squared
x_vals = np.linspace(-100,100,1000)
y_vals = agecoef*x_vals +age2coef*x_vals**2
plt.plot(x_vals, y_vals)
plt.title("Effect of Age")
plt.xlabel("Age")
plt.ylabel("Contribution to Predicted Market Value")
plt.show()
Part 2.3: Turning Categorical Variables into multiple binary variables¶
Of course, we have an error in how we've included player position. Even though the variable is numeric (1,2,3,4) and the model runs without issue, the value we're getting back is garbage. The interpretation, such as it is, is that there is an equal effect of moving from position category 1 to 2, from 2 to 3, and from 3 to 4, and that this effect is about -.61.
In reality, we don't expect moving from one position category to another to be equivalent, nor for a move from category 1 to category 3 to be twice as important as a move from category 1 to category 2. We need to introduce better features to model this variable.
We'll use pd.get_dummies to do the work for us.
train_design_recoded = pd.get_dummies(train_design, columns=['position_cat'], drop_first=True)
test_design_recoded = pd.get_dummies(test_design, columns=['position_cat'], drop_first=True)
train_design_recoded.head()
We've removed the original position_cat column and created three new ones.
Why only three new columns?¶
Why does pandas give us the option to drop the first category?
- If we're fitting a model without a constant, should we have three dummy columns or four dummy columns?
- Fit a model and interpret the coefficient of position_cat_2.
resu = OLS(y_train, train_design_recoded).fit()
resu.summary()
r2_score(y_test, resu.predict(test_design_recoded))
train_design_recoded.shape, y_train.shape
Answers:
- If our model does not have a constant, we must include all four dummy variable columns. If we drop one, we're not modeling any effect of being in that category, and effectively assuming the dropped category's effect is 0.
- Being in position 2 (instead of position 1) has an impact between -1.54 and +2.38 on a player's market value. Since we're using an intercept, the dropped category becomes the baseline and the effect of any dummy variable is the effect of being in that category instead of the baseline category.
Part 3: A nice trick for forward-backwards¶
XOR (the ^ operator) is a logical operation that returns true only when its inputs differ. We can use it to implement forward-or-backwards selection when we want to keep track of which predictors are "left" from a given list of predictors.
The set analog is "symmetric difference". From the python docs:
s.symmetric_difference(t) (equivalently s ^ t): new set with elements in either s or t but not both
set() ^ set([1,2,3])
set([1]) ^ set([1,2,3])
set([1, 2]) ^ set([1,2,3])
Outline a step-forwards algorithm which uses this idea
your answer here
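For reference, here is a minimal sketch of how the XOR trick could drive step-forward selection. The BIC scoring criterion and the data-frame layout (a response column plus candidate predictor columns) are assumptions made for illustration.
# Sketch: step-forward selection driven by the XOR ("what's left") trick
def forward_select(df, response, candidates):
    selected = set()
    path = []
    while selected ^ set(candidates):             # loop while predictors are still "left"
        remaining = selected ^ set(candidates)    # symmetric difference = not-yet-chosen predictors
        scores = []
        for predictor in remaining:
            design = sm.add_constant(df[sorted(selected) + [predictor]])
            scores.append((OLS(df[response], design).fit().bic, predictor))
        best_bic, best_predictor = min(scores)    # lower BIC is better
        selected.add(best_predictor)
        path.append((best_bic, tuple(sorted(selected))))
    return path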