CS109B Data Science 2: Advanced Topics in Data Science

Lab 1 - Introduction and Setup

Harvard University
Spring 2020
Instructors: Mark Glickman, Pavlos Protopapas, and Chris Tanner
Lab Instructors: Chris Tanner and Eleni Kaxiras
Contributors: Will Claybaugh and Eleni Kaxiras


In [1]:
## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
Out[1]:
In [2]:
import numpy as np
#import pandas as pd
import matplotlib.pyplot as plt

%matplotlib inline 

Learning Goals

The purpose of this lab is to get you up to speed with what you will need to run the code for CS109b.

1. Getting Class Material

Option 1A: Cloning the class repo and then copying the contents to a different directory so that you can make changes.

  • Open a Terminal on your computer and go to the directory where you want to clone the repo. Then run

git clone https://github.com/Harvard-IACS/2020-CS109B.git

  • If you have already cloned the repo, go inside the 2020-CS109B/ directory and run

git pull

  • If you change the notebooks and then run git pull, your local changes may be overwritten or cause merge conflicts. To avoid this, create a playground folder and copy the folder containing the notebooks you want to work on there, for example as shown below.
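A possible sequence, assuming you want your working copies in a playground directory under your home folder (the paths here are only an illustration):

mkdir -p ~/playground
cp -R 2020-CS109B/content/labs/lab01 ~/playground/lab01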

Option 1B: Forking the class repo

To get access to the code used in class you will need to clone the class repo: https://github.com/Harvard-IACS/2020-CS109B

In order not to lose any changes you have made when updating the content (pulling) from the main repo, a good practice is to fork the repo and work from your fork. For more on this see Maddy Nakada's notes: How to Fork a Repo. NOTE: While forking is a proper way to handle local changes, it doesn't magically solve everything -- if you edit a file that originated from our course repo (e.g., a HW notebook), and later pull from our 'upstream' repo again, your changes will require resolving merge conflict(s). Thus, if you want to safely and easily preserve your changes, we recommend renaming your files and/or copying them into an independent directory within your repo.
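For reference, a typical update sequence looks like the following, assuming you have forked the repo on GitHub and cloned your fork (the remote name 'upstream' is just a convention):

git remote add upstream https://github.com/Harvard-IACS/2020-CS109B.git
git fetch upstream
git merge upstream/master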

You will need this year's repo: https://github.com/Harvard-IACS/2020-CS109B.git

2. Running code

Option 2A: Managing Local Resources (supported by cs109b)

Use Virtual Environments: I cannot stress this enough!

Isolating your projects inside specific environments helps you manage dependencies and therefore keep your sanity. You can recover from mess-ups by simply deleting an environment, and installations of certain libraries sometimes conflict with one another.

In increasing order of isolation, you can: a) set up a virtual environment, or b) set up a virtual machine. The two most popular tools for setting up environments are:

  • conda (a package and environment manager)
  • pip (a Python package manager) with virtualenv (a tool for creating environments)

We recommend using conda for package installation and environment management. conda installs packages from the Anaconda Repository and Anaconda Cloud, whereas pip installs packages from PyPI. Even if you are using conda as your primary package installer and are inside a conda environment, you can still use pip install for those rare packages that are not included in the conda ecosystem.
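For example, inside the activated environment (the package names here are only illustrations):

$ conda install seaborn    # try conda first
$ pip install pygam        # fall back to pip if a package is not on your conda channels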

See here for more details on how to manage Conda Environments.

Exercise 1: Clone or fork the CS109b git repository. Use the cs109b.yml file to create an environment:
$ cd 2020-CS109B/content/labs/lab01/
$ conda env create -f cs109b.yml
$ conda activate cs109b

We have included the packages that you will need in the cs109b.yml file. It should be in the same directory as this notebook.
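Once the environment is activated and selected as this notebook's kernel, a quick sanity check (just a sketch) is to confirm that Python is running from inside it:

In [ ]:
import sys
print(sys.executable)  # the path should contain 'cs109b' if the environment is active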

Option 2B: Using Cloud Resources (optional)

Using SEAS JupyterHub (supported by cs109b)

Instructions for Using SEAS JupyterHub

SEAS and FAS are providing you with a platform on AWS to use for the class, accessible via the 'JupyterHub' menu link in Canvas. Between now and March 1, each student will have their own t2.medium AWS EC2 instance with 4GB CPU RAM and 2 vCPUs. After March 1st, the instances will be upgraded to p2.xlarge AWS EC2 instances with a GPU, 61GB CPU RAM, 12GB GPU RAM, 10GB disk space, and 4 vCPUs.

Most of the libraries such as keras, tensorflow, pandas, etc. are pre-installed. If a library is missing you may install it via the Terminal.
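For example, from the JupyterHub Terminal (the package name here is only an illustration):

$ pip install plotly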

NOTE: The AWS platform is funded by SEAS and FAS for the purposes of this class. It does not run against your individual credit. Use it with prudence; using it for purposes unrelated to this course is not allowed.

Help us keep this service: Make sure you stop your instance as soon as you do not need it.

Using Google Colab (on your own)

Google's Colab platform https://colab.research.google.com/ offers a GPU environment for testing your ideas. It's fast and free, with the only caveat that your files persist for just 12 hours. The solution is to keep your files in a repository and clone it each time you use Colab.
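For example, in the first cell of a Colab notebook (the ! prefix runs a shell command from the notebook):

In [ ]:
!git clone https://github.com/Harvard-IACS/2020-CS109B.git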

Using AWS in the Cloud (on your own)

For those of you who want your own machines in the Cloud to run whatever you want, Amazon Web Services is a (paid) solution. For more see: https://docs.aws.amazon.com/polly/latest/dg/setting-up.html

Remember, AWS is a paid service so if you let your machine run for days you will get charged!
[image: aws-dog meme]

source: maybe Stanford's cs231n via Medium

3. Ensuring everything is installed correctly

Packages we will need for this class

We will test that these packages load correctly in our environment.

In [12]:
from sklearn import datasets
iris = datasets.load_iris()
digits = datasets.load_digits()
digits.target # you should see [0, 1, 2, ..., 8, 9, 8]
Out[12]:
array([0, 1, 2, ..., 8, 9, 8])
In [13]:
from scipy import misc
import matplotlib.pyplot as plt

face = misc.face()
plt.imshow(face)
plt.show() # you should see a raccoon
In [14]:
import statsmodels.api as sm

import statsmodels.formula.api as smf

# Load data
dat = sm.datasets.get_rdataset("Guerry", "HistData").data
dat.head()
Out[14]:
dept Region Department Crime_pers Crime_prop Literacy Donations Infants Suicides MainCity ... Crime_parents Infanticide Donation_clergy Lottery Desertion Instruction Prostitutes Distance Area Pop1831
0 1 E Ain 28870 15890 37 5098 33120 35039 2:Med ... 71 60 69 41 55 46 13 218.372 5762 346.03
1 2 N Aisne 26226 5521 51 8901 14572 12831 2:Med ... 4 82 36 38 82 24 327 65.945 7369 513.00
2 3 C Allier 26747 7925 13 10973 17044 114121 2:Med ... 46 42 76 66 16 85 34 161.927 7340 298.26
3 4 E Basses-Alpes 12935 7289 46 2733 23018 14238 1:Sm ... 70 12 37 80 32 29 2 351.399 6925 155.90
4 5 E Hautes-Alpes 17488 8174 69 6962 23076 16171 1:Sm ... 22 23 64 79 35 7 1 320.280 5549 129.10

5 rows × 23 columns

In [6]:
from pygam import PoissonGAM, s, te
from pygam.datasets import chicago
from mpl_toolkits.mplot3d import Axes3D

X, y = chicago(return_X_y=True)

gam = PoissonGAM(s(0, n_splines=200) + te(3, 1) + s(2)).fit(X, y)
In [7]:
XX = gam.generate_X_grid(term=1, meshgrid=True)
Z = gam.partial_dependence(term=1, X=XX, meshgrid=True)

ax = plt.axes(projection='3d')
ax.plot_surface(XX[0], XX[1], Z, cmap='viridis')
Out[7]:
In [ ]:
import pymc3 as pm
print('Running PyMC3 v{}'.format(pm.__version__)) # you should see 'Running PyMC3 v3.8'
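If the import succeeds, a minimal smoke test (just a sketch; the model below is only an illustration) is to sample from a trivial model:

In [ ]:
# Illustrative smoke test: draw a few samples from a standard normal prior
with pm.Model():
    mu = pm.Normal('mu', mu=0., sigma=1.)
    trace = pm.sample(100, tune=100, chains=1, progressbar=False)
print(trace['mu'].mean())  # should print a finite number near 0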

Plotting

matplotlib and seaborn

Plotting a function of 2 variables using contours

In optimization, our objective function will often be a function of two or more variables. While it's hard to visualize a function of more than two variables, it's very informative to plot one of two variables. To do this we use contours. First we define the $x_1$ and $x_2$ variables and then construct their pairs using meshgrid.

In [21]:
import seaborn as sns  # imported here just to confirm the package is installed
In [11]:
x1 = np.linspace(-0.1, 0.1, 50)
x2 = np.linspace(-0.1, 0.1, 100)
xx, yy = np.meshgrid(x1, x2)
z = np.sqrt(xx**2+yy**2)
plt.contour(x1,x2,z);

We will be using tensorflow and keras

TensorFlow is a framework for representing complicated ML algorithms and executing them on any platform, from a phone to a distributed system using GPUs. Developed by Google Brain, TensorFlow is used very broadly today.

Keras is a high-level API used for fast prototyping, advanced research, and production. We will use tf.keras, which is TensorFlow's implementation of the Keras API.

Exercise 2: Run the following cells to make sure you have the basic libraries to do deep learning
In [3]:
from __future__ import absolute_import, division, print_function, unicode_literals

# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.regularizers import l2

tf.keras.backend.clear_session()  # For easy reset of notebook state.

print(tf.__version__)  # You should see 2.0.0 or higher here!
print(tf.keras.__version__)
2.0.0
2.2.4-tf
In [8]:
# Listing the devices available to TensorFlow. My machine has no NVIDIA GPU..
devices = tf.config.experimental_list_devices()
print(f'My computer has the following devices: {devices}')
My computer has the following devices: ['/job:localhost/replica:0/task:0/device:CPU:0']
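To list only the GPUs (an empty list means TensorFlow sees none), you can also run:

In [ ]:
gpus = tf.config.experimental.list_physical_devices('GPU')
print(f'Physical GPUs visible to TensorFlow: {gpus}')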
DELIVERABLES

Submit this notebook to Canvas with the output produced. Describe below the environment in which you will be working, e.g., 'I have installed the environment needed locally and have tested all the code in this notebook' and/or 'I am using JupyterHub'.

---------------- your answer here