CS-109A Introduction to Data Science

Lab 4: Multiple and Polynomial Regression (September 26, 2019 version)

Harvard University
Fall 2019
Instructors: Pavlos Protopapas, Kevin Rader, and Chris Tanner
Lab Instructors: Chris Tanner and Eleni Kaxiras
Authors: Rahul Dave, David Sondak, Will Claybaugh, Pavlos Protopapas, Chris Tanner


In [1]:
## RUN THIS CELL TO GET THE RIGHT FORMATTING 
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
Out[1]:

Table of Contents

  1. Learning Goals / Tip of the Week / Terminology
  2. Training/Validation/Testing Splits (slides + interactive warm-up)
  3. Polynomial Regression, and Revisiting the Cab Data
  4. Multiple regression and exploring the Football data
  5. A nice trick for forward-backwards
  6. Cross-validation

Learning Goals

After this lab, you should be able to

  • Explain the difference between train/validation/test data and WHY we have each.
  • Implement cross-validation on a dataset
  • Implement arbitrary multiple regression models in both SK-learn and Statsmodels.
  • Interpret the coefficient estimates produced by each model, including transformed and dummy variables
In [2]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

import statsmodels.api as sm
from statsmodels.api import OLS

from sklearn import preprocessing
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

from pandas.plotting import scatter_matrix

import seaborn as sns


%matplotlib inline

Extra Tip of the Week

Within your terminal (aka console aka command prompt), most shell environments support useful shortcuts:

  • press the [up arrow] to navigate through your most recent commands
  • press [CTRL + A] to go to the beginning of the line
  • press [CTRL + E] to go to the end of the line
  • press [CTRL + K] to clear the line
  • type `history` to see the last commands you've run

Terminology

Say we have input features $X$ which, via some function $f()$, approximate outputs $Y$. That is, $Y = f(X) + \epsilon$, where $\epsilon$ represents our unmeasurable variation (i.e., irreducible error).

  • Inference: estimates the function $f$, but the goal isn't to make predictions for $Y$; rather, it is more concerned with understanding the relationship between $X$ and $Y$.
  • Prediction: estimates the function $f$ with the goal of making accurate $Y$ predictions for some unseen $X$.

We have recently used two highly popular, useful libraries, statsmodels and sklearn.

statsmodels is mostly focused on the inference task. It aims to make good estimates for $f()$ (via solving for our $\beta$'s), and it provides expansive details about its certainty. It provides lots of tools to discuss confidence, but isn't great at dealing with test sets.

sklearn is mostly focused on the prediction task. It aims to make a well-fit line to our input data $X$, so as to make good $Y$ predictions for some unseen inputs $X$. It provides a shallower analysis of our variables. In other words, sklearn is great at test sets and validations, but it can't really discuss uncertainty in the parameters or predictions.

  • R-squared: An interpretable summary of how well the model did. 1 is perfect, 0 is a trivial baseline model based on the mean $y$ value, and negative is worse than the trivial model.
  • F-statistic: A value testing whether we're likely to see these results (or even stronger ones) if none of the predictors actually mattered.
  • Prob (F-statistic): The probability that we'd see these results (or even stronger ones) if none of the predictors actually mattered. If this probability is small then either A) some combination of predictors actually matters or B) something rather unlikely has happened
  • coef: The estimate of each beta. This has several sub-components:
    • std err: The amount we'd expect this value to wiggle if we re-did the data collection and re-ran our model. More data tends to make this wiggle smaller, but sometimes the collected data just isn't enough to pin down a particular value.
    • t and P>|t|: similar to the F-statistic, these measure the probability of seeing a coefficient this big (or even bigger) if the given variable didn't actually matter. A small probability is evidence that the variable has some effect, but not necessarily a large or practically important one.
    • [0.025 0.975]: Endpoints of the 95% confidence interval. This is an interval drawn in a clever way which gives an idea of where the true beta value might plausibly live. (If you want to understand why "there's a 95% chance the true beta is in the interval" is wrong, start a chat with Will : )

Part 2: Polynomial Regression, and Revisiting the Cab Data

Polynomial regression uses a linear model to estimate a non-linear function (i.e., a function with polynomial terms). For example:

$y_i = \beta_0 + \beta_1x_i + \beta_2x_i^{2}$

It is a linear model because we are still solving a linear equation (the linear aspect refers to the beta coefficients).

In [3]:
# read in the data, break into train and test
cab_df = pd.read_csv("../data/dataset_1.txt")
train_data, test_data = train_test_split(cab_df, test_size=.2, random_state=42)
cab_df.head()
Out[3]:
TimeMin PickupCount
0 860.0 33.0
1 17.0 75.0
2 486.0 13.0
3 300.0 5.0
4 385.0 10.0
In [4]:
cab_df.shape
Out[4]:
(1250, 2)
In [5]:
# do some data cleaning
X_train = train_data['TimeMin'].values.reshape(-1,1)/60 # transforms it to being hour-based
y_train = train_data['PickupCount'].values

X_test = test_data['TimeMin'].values.reshape(-1,1)/60 # hour-based
y_test = test_data['PickupCount'].values

def plot_cabs(cur_model, poly_transformer=None):
    
    # build the x values for the prediction line
    x_vals = np.arange(0,24,.1).reshape(-1,1)
    
    # optionally use the passed-in transformer
    if poly_transformer is not None:
        dm = poly_transformer.fit_transform(x_vals)
    else:
        dm = x_vals
        
    # make the prediction at each x value
    prediction = cur_model.predict(dm)
    
    # plot the prediction line, and the test data
    plt.plot(x_vals,prediction, color='k', label="Prediction")
    plt.scatter(X_test, y_test, label="Test Data")

    # label your plots
    plt.ylabel("Number of Taxi Pickups")
    plt.xlabel("Time of Day (Hours Past Midnight)")
    plt.legend()
    plt.show()
In [6]:
from sklearn.linear_model import LinearRegression
fitted_cab_model0 = LinearRegression().fit(X_train, y_train)
plot_cabs(fitted_cab_model0)
In [7]:
fitted_cab_model0.score(X_test, y_test)
Out[7]:
0.240661535615741
Exercise

Questions:

  1. The above code uses sklearn. As more practice, and to help you stay versed in both libraries, perform the same task (fit a linear regression line) using statsmodels and report the $r^2$ score. Is it the same value as what sklearn reports, and is this the expected behavior?
In [9]:
#### EXERCISE: write code here (feel free to work with a partner)
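One possible sketch for the exercise above (not the official solution): statsmodels needs an explicit intercept column, and its summary reports the training $R^2$, so we compute the test-set $R^2$ with r2_score() to compare against sklearn's .score().

In [ ]:
# a sketch: the same simple regression, fit with statsmodels
sm_cab_model = OLS(y_train, sm.add_constant(X_train)).fit()
print(sm_cab_model.summary())

# test-set R^2, computed the same way sklearn's .score(X_test, y_test) does;
# the two libraries should agree, since it's the same model and the same metric
print("Test R-squared:", r2_score(y_test, sm_cab_model.predict(sm.add_constant(X_test))))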

We can see that there's still a lot of variation in cab pickups that's not being captured by a linear fit. Further, the linear fit is predicting massively more pickups at 11:59pm than at 12:00am. This is a bad property, and it's the consequence of having a straight line with a non-zero slope. However, we can add columns to our data for $TimeMin^2$ and $TimeMin^3$ and so on, allowing a curvy polynomial line to hopefully fit the data better.

We'll be using sklearn's PolynomialFeatures() function to take some of the tedium out of building the expanded input data. In fact, if all we want is a formula like $y \approx \beta_0 + \beta_1 x + \beta_2 x^2 + ...$, it will directly return a new copy of the data in this format!

In [10]:
transformer_3 = PolynomialFeatures(3, include_bias=False)
expanded_train = transformer_3.fit_transform(X_train) # TRANSFORMS it to polynomial features
pd.DataFrame(expanded_train).describe() # notice that the columns now contain x, x^2, x^3 values
Out[10]:
0 1 2
count 1000.000000 1000.000000 1000.000000
mean 11.717217 182.833724 3234.000239
std 6.751751 167.225711 3801.801966
min 0.066667 0.004444 0.000296
25% 6.100000 37.210833 226.996222
50% 11.375000 129.390694 1471.820729
75% 17.437500 304.066458 5302.160684
max 23.966667 574.401111 13766.479963

A few notes on PolynomialFeatures:

  • The interface is a bit strange. PolynomialFeatures is a 'transformer' in sklearn. We'll be using several transformers that learn a transformation on the training data, and then we will apply those transformations on future data. With PolynomialFeatures, the .fit() is pretty trivial, and we often fit and transform in one command, as seen above with `.fit_transform()`.
  • You rarely want include_bias=True (a column of all 1's), since sklearn's LinearRegression will fit an intercept automatically. Remember, when using statsmodels, you can just .add_constant() right before you fit the data.
  • If you want polynomial features for several different variables (i.e., multinomial regression), you should call .fit_transform() separately on each column and append all the results to a copy of the data (unless you also want interaction terms between the newly-created features). See np.concatenate() for joining arrays; a minimal sketch follows.
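To make the last bullet concrete, here is a minimal sketch using a small hypothetical two-column array (not the lab's data): expand each column separately, then join the results.

In [ ]:
# hypothetical input with two columns, x1 and x2
two_features = np.arange(10, dtype=float).reshape(-1, 2)

poly = PolynomialFeatures(3, include_bias=False)
expanded_cols = [poly.fit_transform(two_features[:, [i]])   # x, x^2, x^3 for one column
                 for i in range(two_features.shape[1])]
design = np.concatenate(expanded_cols, axis=1)              # shape (5, 6): no interaction terms
design.shape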
In [11]:
fitted_cab_model3 = LinearRegression().fit(expanded_train, y_train)
print("fitting expanded_train:", expanded_train)
plot_cabs(fitted_cab_model3, transformer_3)
fitting expanded_train: [[6.73333333e+00 4.53377778e+01 3.05274370e+02]
 [2.18333333e+00 4.76694444e+00 1.04078287e+01]
 [1.41666667e+00 2.00694444e+00 2.84317130e+00]
 ...
 [1.96666667e+01 3.86777778e+02 7.60662963e+03]
 [1.17333333e+01 1.37671111e+02 1.61534104e+03]
 [1.42000000e+01 2.01640000e+02 2.86328800e+03]]
Exercise

Questions:

  1. Calculate the polynomial model's $R^2$ performance on the test set.
  2. Does the polynomial model improve on the purely linear model?
  3. Make a residual plot for the polynomial model. What does this plot tell us about the model?
In [12]:
# ANSWER 1
expanded_test = transformer_3.transform(X_test)  # .transform() only: the transformer was already fit on the training data
print("Test R-squared:", fitted_cab_model3.score(expanded_test, y_test))
# NOTE 1: sklearn models have a .score() method; r2_score() (from sklearn.metrics) computes the same value from y_true and y_pred
# NOTE 2: .fit_transform() fits the transformer and then transforms the data (in that order)
Test R-squared: 0.33412512570778774
In [13]:
# ANSWER 2: does it?
In [14]:
# ANSWER 3 (class discussion about the residuals)
x_matrix = transformer_3.fit_transform(X_train)

prediction = fitted_cab_model3.predict(x_matrix)
residual = y_train - prediction
plt.scatter(X_train, residual, label="Residual")
plt.axhline(0, color='k')

plt.title("Residuals for the Cubic Model")
plt.ylabel("Residual Number of Taxi Pickups")
plt.xlabel("Time of Day (Hours Past Midnight)")
plt.legend()
Out[14]:

Other features

Polynomial features are not the only constructed features that can help fit the data. Because these data have a 24-hour cycle, we may want to build features that follow such a cycle, for example $\sin\left(\frac{2\pi x}{24}\right)$, $\sin\left(\frac{2\pi x}{12}\right)$, $\sin\left(\frac{2\pi x}{8}\right)$. Other feature transformations are appropriate to other types of data; for instance, certain feature transformations have been developed for geographical data.
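As a hedged sketch (our own column construction, not part of the lab's pipeline), such cyclic features could be built from the hour-of-day variable like this:

In [ ]:
# sine features with 24-, 12-, and 8-hour periods, built from X_train (hours past midnight)
cyclic_train = np.concatenate([
    np.sin(2 * np.pi * X_train / 24),   # 24-hour cycle
    np.sin(2 * np.pi * X_train / 12),   # 12-hour harmonic
    np.sin(2 * np.pi * X_train / 8),    # 8-hour harmonic
], axis=1)
cyclic_train.shape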

Scaling Features

When using polynomials, we are explicitly trying to use the higher-order values of a given feature. However, these polynomial features can take on drastically large values (with correspondingly large variance), which can make it difficult to learn appropriate weights. To counter this, we can scale the values of each feature.

For our ongoing taxi-pickup example, using polynomial features improved our model. If we wished to scale the features, we could use sklearn's StandardScaler:

In [15]:
# SCALES THE EXPANDED/POLY TRANSFORMED DATA
# we don't need to convert to a pandas dataframe, but it can be useful for scaling select columns
train_copy = pd.DataFrame(expanded_train.copy())
test_copy = pd.DataFrame(expanded_test.copy())

# Fit the scaler on the training data
scaler = StandardScaler().fit(train_copy)

# Scale both the test and training data. 
train_scaled = scaler.transform(expanded_train)
test_scaled = scaler.transform(expanded_test)

# we could optionally run a new regression model on this scaled data
fitted_scaled_cab = LinearRegression().fit(train_scaled, y_train)
fitted_scaled_cab.score(test_scaled, y_test)
Out[15]:
0.33412512570778274

Part 3: Multiple regression and exploring the Football (aka soccer) data

Let's move on to a different dataset! The data imported below were scraped by Shubham Maurya and record various facts about players in the English Premier League. Our goal will be to fit models that predict the players' market value (what the player could earn when hired by a new team), as estimated by https://www.transfermarkt.us.

name: Name of the player
club: Club of the player
age: Age of the player
position: The usual position on the pitch
position_cat: 1 for attackers, 2 for midfielders, 3 for defenders, 4 for goalkeepers
market_value: As on transfermarkt.com on July 20th, 2017
page_views: Average daily Wikipedia page views from September 1, 2016 to May 1, 2017
fpl_value: Value in Fantasy Premier League as on July 20th, 2017
fpl_sel: % of FPL players who have selected that player in their team
fpl_points: FPL points accumulated over the previous season
region: 1 for England, 2 for EU, 3 for Americas, 4 for Rest of World
nationality: Player's nationality
new_foreign: Whether a new signing from a different league, for 2017/18 (till 20th July)
age_cat: a categorical version of the Age feature
club_id: a numerical version of the Club feature
big_club: Whether one of the Top 6 clubs
new_signing: Whether a new signing for 2017/18 (till 20th July)

As always, we first import, verify, split, and explore the data.

Part 3.1: Import and verification and grouping

In [16]:
league_df = pd.read_csv("../data/league_data.txt")
print(league_df.dtypes)

# QUESTION: what would you guess is the mean age? mean salary?
#league_df.head()
name             object
club             object
age               int64
position         object
position_cat      int64
market_value    float64
page_views        int64
fpl_value       float64
fpl_sel          object
fpl_points        int64
region          float64
nationality      object
new_foreign       int64
age_cat           int64
club_id           int64
big_club          int64
new_signing       int64
dtype: object
In [17]:
league_df.shape
Out[17]:
(461, 17)
In [18]:
#league_df.describe()

(Stratified) train/test split

We want to make sure that the training and test data have appropriate representation of each region; it would be bad for the training data to entirely miss a region. This is especially important because some regions are rather rare.

Exercise

Questions:

  1. Use the train_test_split() function, (a) ensuring the test size is 20% of the data, and (b) using the 'stratify' argument (look up its documentation online) to keep equal representation of each region. This doesn't work by default, correct? What is the issue?
  2. Deal with the issue you encountered above. Hint: you may find numpy's .isnan() and pandas' .dropna() functions useful! (One possible sketch follows the exercise cell below.)
  3. How did you deal with the error generated by train_test_split()? How did you justify your action?

your answer here:

In [33]:
# EXERCISE: feel free to work with a partner
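One possible approach (a sketch, not the official answer): stratification fails because region contains missing values, so one defensible option is to drop those rows before splitting.

In [ ]:
# region has NaNs, which breaks stratification; dropping those rows is one
# judgment call (imputing a region for them would be another)
league_df_clean = league_df.dropna(subset=['region'])
print("dropped", len(league_df) - len(league_df_clean), "rows with missing region")

train_data, test_data = train_test_split(
    league_df_clean, test_size=0.2, random_state=42,
    stratify=league_df_clean['region'])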
In [25]:
train_data.shape, test_data.shape
Out[25]:
((1000, 2), (250, 2))

Now that we won't be peeking at the test set, let's explore and look for patterns! We'll introduce a number of useful pandas and numpy functions along the way.

Groupby

Pandas' .groupby() function is a wonderful tool for data analysis. It allows us to analyze each of several subgroups.

Many times, .groupby() is combined with .agg() to get a summary statistic for each subgroup. For instance: What is the average market value, median page views, and maximum fpl_points for each player position?

In [23]:
train_data.groupby('position').agg({
    'market_value': np.mean,
    'page_views': np.median,
    'fpl_points': np.max
})
---------------------------------------------------------------------------
KeyError: 'position'
(This fails because train_data still holds the cab data from Part 2; run the stratified league split above first.)
In [22]:
train_data.position.unique()
---------------------------------------------------------------------------
AttributeError: 'DataFrame' object has no attribute 'position'
(Same cause as above: train_data is still the cab data.)
In [26]:
train_data.groupby(['big_club', 'position']).agg({
    'market_value': np.mean,
    'page_views': np.mean,
    'fpl_points': np.mean
})
---------------------------------------------------------------------------
KeyError: 'big_club'
(Same cause as above.)
Exercise

Question:

  1. Notice that the .groupby() function above takes a list of two column names. Does the order matter? What happens if we switch the two so that 'position' is listed before 'big_club'?
In [27]:
# EXERCISE: feel free to work with a partner
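A sketch of one possible answer (assuming the stratified league split above has been run, so train_data now holds the football data): swapping the order only changes which variable forms the outer level of the row MultiIndex; the group statistics themselves are identical.

In [ ]:
# same aggregation as above, with 'position' as the outer grouping level
train_data.groupby(['position', 'big_club']).agg({
    'market_value': np.mean,
    'page_views': np.mean,
    'fpl_points': np.mean
})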

Part 3.2: Linear regression on the football data

This section of the lab focuses on fitting a model to the football (soccer) data and interpreting the model results. The model we'll use is

$$\text{market_value} \approx \beta_0 + \beta_1\text{fpl_points} + \beta_2\text{age} + \beta_3\text{age}^2 + \beta_4\log_2\left(\text{page_views}\right) + \beta_5\text{new_signing} + \beta_6\text{big_club} + \beta_7\text{position_cat}$$

We're including a 2nd degree polynomial in age because we expect pay to increase as a player gains experience, but then decrease as they continue aging. We're taking the log of page views because they have such a large, skewed range and the transformed variable will have fewer outliers that could bias the line. We choose the base of the log to be 2 just to make interpretation cleaner.
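One consequence of the $\log_2$ transform is a clean reading of $\beta_4$: holding everything else fixed,

$$\Delta\,\widehat{\text{market_value}} = \beta_4\,\Delta\log_2(\text{page_views}),$$

so doubling a player's page views (a change of 1 in $\log_2(\text{page_views})$) changes the predicted market value by exactly $\beta_4$.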

Exercise

Questions:

  1. Build the data and fit this model to it. How good is the overall model?
  2. Interpret the regression model. What is the meaning of the coefficient for:
    • age and age$^2$
    • $\log_2($page_views$)$
    • big_club
  3. What should a player do in order to improve their market value? How many additional page views would a player need to gain in order to increase their predicted market value by 10?
In [29]:
# Q1: we'll do most of it for you ...
y_train = train_data['market_value']
y_test = test_data['market_value']
def build_football_data(df):
    x_matrix = df[['fpl_points','age','new_signing','big_club','position_cat']].copy()
    x_matrix['log_views'] = np.log2(df['page_views'])
    
    # create the age-squared column (one way to complete the exercise)
    x_matrix['age_squared'] = df['age']**2
    
    # reorder the columns so they match the equation above
    x_matrix = x_matrix[['fpl_points', 'age', 'age_squared', 'log_views',
                         'new_signing', 'big_club', 'position_cat']]
    
    
    # add a constant
    x_matrix = sm.add_constant(x_matrix)
    
    return x_matrix

# use build_football_data() to transform both the train_data and test_data
train_transformed = build_football_data(train_data)
test_transformed = build_football_data(test_data)

fitted_model_1 = OLS(endog= y_train, exog=train_transformed, hasconst=True).fit()
fitted_model_1.summary()

# report R^2 on the test set, then answer the question above about the overall goodness of the model
print("Test R-squared:", r2_score(y_test, fitted_model_1.predict(test_transformed)))
---------------------------------------------------------------------------
KeyError: 'market_value'
(train_data is still the cab data; run the stratified league split above, then re-run this cell.)
In [31]:
# Q2: let's use the age coefficients to show the effect of age has on one's market value;
# we can get the age and age^2 coefficients via:
agecoef = fitted_model_1.params.age
age2coef = fitted_model_1.params.age_squared

# let's set our x-axis (corresponding to age) to be a wide range from -100 to 100, 
# just to see a grand picture of the function
x_vals = np.linspace(-100,100,1000)
y_vals = agecoef*x_vals +age2coef*x_vals**2

# WRITE CODE TO PLOT x_vals vs y_vals
plt.plot(x_vals, y_vals)
plt.title("Effect of Age")
plt.xlabel("Age")
plt.ylabel("Contribution to Predicted Market Value")
plt.show()

# Q2A: WHAT HAPPENS IF WE USED ONLY AGE (not AGE^2) in our model (what's the r2?); make the same plot of age vs market value
# Q2B: WHAT HAPPENS IF WE USED ONLY AGE^2 (not age) in our model (what's the r2?); make the same plot of age^2 vs market value
# Q2C: PLOT page views vs market value

# EXERCISE: WRITE CODE HERE
---------------------------------------------------------------------------
NameError: name 'fitted_model_1' is not defined
(The previous cell errored, so fitted_model_1 was never created.)

Part 3.3: Turning Categorical Variables into multiple binary variables

Of course, we have an error in how we've included player position. Even though the variable is numeric (1,2,3,4) and the model runs without issue, the value we're getting back is garbage. The interpretation, such as it is, is that there is an equal effect of moving from position category 1 to 2, from 2 to 3, and from 3 to 4, and that this effect is probably between -0.5 and -1 (depending on your run).

In reality, we don't expect moving from one position category to another to be equivalent, nor for a move from category 1 to category 3 to be twice as important as a move from category 1 to category 2. We need to introduce better features to model this variable.

We'll use pd.get_dummies to do the work for us.

In [ ]:
train_design_recoded = pd.get_dummies(train_transformed, columns=['position_cat'], drop_first=True)
test_design_recoded = pd.get_dummies(test_transformed, columns=['position_cat'], drop_first=True)

train_design_recoded.head()

We've removed the original position_cat column and created three new ones.

Why only three new columns?

Why does pandas give us the option to drop the first category?

Exercise

Questions:

  1. If we're fitting a model without a constant, should we have three dummy columns or four dummy columns?
  2. Fit a model on the new, recoded data, then interpret the coefficient of position_cat_2.
In [32]:
# FEEL FREE TO WORK WITH A PARTNER
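A sketch of one possible answer (assuming the cells above ran on the league data, and that pandas named the dummy columns position_cat_2, position_cat_3, position_cat_4):

In [ ]:
# fit the same OLS model on the recoded (dummy-encoded) design matrix
fitted_model_2 = OLS(endog=y_train, exog=train_design_recoded, hasconst=True).fit()
fitted_model_2.summary()

Interpretation sketch: the coefficient on position_cat_2 is the expected difference in market value between a midfielder (category 2) and the baseline attacker (category 1, the dropped column), holding the other predictors fixed.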

Answers:

Part 4: A nice trick for forward-backwards

XOR (operator ^) is a logical operation that returns true only when its inputs differ. We can use it to implement forward-or-backwards selection when we want to keep track of which predictors are "left" from a given list of predictors.

The set analog is "symmetric difference". From the python docs:

s.symmetric_difference(t) (equivalently s ^ t): new set with elements in either s or t but not both

In [ ]:
set() ^ set([1,2,3])
In [ ]:
set([1]) ^ set([1,2,3])
In [ ]:
set([1, 2]) ^ set([1,2,3])
Exercise

Outline a step-forwards algorithm which uses this idea
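One possible outline, written as a sketch: keep a set of selected predictors, XOR it against the full predictor set to find what's "left", add whichever remaining predictor most improves a score, and stop when nothing improves. The score_model() helper and the choice of adjusted $R^2$ as the score are our own assumptions, not prescribed by the lab.

In [ ]:
def score_model(df, response, predictors):
    # adjusted R^2 of an OLS fit on the given predictors (our choice of score)
    X = sm.add_constant(df[sorted(predictors)])
    return sm.OLS(df[response], X).fit().rsquared_adj

def forward_select(df, response, all_predictors):
    # all_predictors: a set of column names to choose from
    selected, best_score = set(), -np.inf
    while selected ^ all_predictors:               # some predictors are still "left"
        remaining = selected ^ all_predictors      # symmetric difference
        scores = {p: score_model(df, response, selected | {p}) for p in remaining}
        best = max(scores, key=scores.get)
        if scores[best] <= best_score:
            break                                  # no remaining predictor helps
        selected.add(best)
        best_score = scores[best]
    return selected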

BONUS EXERCISE:

We have provided a spreadsheet of Boston housing prices (data/boston_housing.csv). The 14 columns are as follows:

  1. CRIM: per capita crime rate by town
  2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
  3. INDUS: proportion of non-retail business acres per town
  4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
  5. NOX: nitric oxides concentration (parts per 10 million)
  6. RM: average number of rooms per dwelling
  7. AGE: proportion of owner-occupied units built prior to 1940
  8. DIS: weighted distances to five Boston employment centers
  9. RAD: index of accessibility to radial highways
  10. TAX: full-value property-tax rate per \$10,000
  11. PTRATIO: pupil-teacher ratio by town
  12. B: $1000(B_k - 0.63)^2$ where $B_k$ is the proportion of blacks by town
  13. LSTAT: % lower status of the population
  14. MEDV: Median value of owner-occupied homes in \$1000s

We can see that the input attributes have a mixture of units.

There are 450 observations.

Exercise

Using the above file, try your best to predict housing prices (the 14th column). We have provided a test set, data/boston_housing_test.csv, but refrain from looking at the file or evaluating on it until you have finalized and trained a model.

  1. Load in the data. It is tab-delimited. Quickly look at a summary of the data to familiarize yourself with it and ensure nothing is too egregious.
  2. Use a previously-discussed function to automatically partition the data into a training and validation (aka development) set. It is up to you to choose how large these two portions should be.
  3. Train a basic model on just a subset of the features. What is the performance on the validation set?
  4. Train a basic model on all of the features. What is the performance on the validation set?
  5. Toy with the model until you feel your results are reasonably good.
  6. Perform cross-validation with said model, and measure the average performance. Are the results what you expected? Were the average results better or worse than those from your original single validation set?
  7. Experiment with other models, and for each, perform 10-fold cross-validation. Which model yields the best average performance? Select this as your final model (a sketch of this step follows the list).
  8. Use this model to evaluate your performance on the testing set. What is your performance (MSE)? Is this what you expected?
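As a sketch of step 7, assuming the training data has been loaded into a DataFrame named boston_train (our name, not given by the lab) with MEDV as the response:

In [ ]:
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X_boston = boston_train.drop(columns=['MEDV'])
y_boston = boston_train['MEDV']

# 10-fold cross-validated MSE; sklearn returns negated MSE, so flip the sign
cv_mse = -cross_val_score(LinearRegression(), X_boston, y_boston,
                          cv=10, scoring='neg_mean_squared_error')
print("10-fold MSE: mean %.2f, std %.2f" % (cv_mse.mean(), cv_mse.std()))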
In [ ]: