Key Word(s): Virtual Environments, Anaconda, JupyterHub
CS109B Data Science 2: Advanced Topics in Data Science
Lab 1 - Introduction and Setup¶
Harvard University
Spring 2020
Instructors: Mark Glickman, Pavlos Protopapas, and Chris Tanner
Lab Instructors: Chris Tanner and Eleni Kaxiras
Contributors: Will Claybaugh and Eleni Kaxiras
## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
import numpy as np
#import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Learning Goals¶
The purpose of this lab is to get you up to speed with what you will need to run the code for CS109B.
1. Getting Class Material¶
Option 1A: Cloning the class repo and copying the contents to a different directory so you can make changes.¶
- Open the Terminal on your computer and go to the directory where you want to clone the repo. Then run
git clone https://github.com/Harvard-IACS/2020-CS109B.git
- If you have already cloned the repo, go inside the '/2020-CS109B/' directory and run
git pull
- If you change the notebooks and then run
git pull
your changes may be overwritten or cause merge conflicts. So create a playground folder and copy the folder containing the notebooks you want to work on into it, as sketched below.
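For example, a minimal sketch of this workflow (the playground name is just a suggestion; the lab path matches the repo layout used later in this notebook):
$ cd 2020-CS109B
$ mkdir playground
$ cp -r content/labs/lab01 playground/
$ git pull
The copies under playground/ are untouched by the pull, so your edits there are safe.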
Option 1B: Forking the class repo¶
To get access to the code used in class you will need to clone the class repo: https://github.com/Harvard-IACS/2020-CS109B
In order not to lose any changes you have made when updating the content (pulling) from the main repo, a good practice is to fork the repo on GitHub and clone your fork locally. For more on this see Maddy Nakada's notes: How to Fork a Repo. NOTE: While forking is the proper way to handle local changes, it doesn't magically solve everything -- if you edit a file that originated from our course repo (e.g., a HW notebook) and later pull from our 'upstream' repo again, any changes you made will require resolving merge conflict(s). Thus, if you want to safely and easily preserve your changes, we recommend renaming your files and/or copying them into an independent directory within your repo.
You will need this year's repo: https://github.com/Harvard-IACS/2020-CS109B.git
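If you do fork, a minimal sketch of the fork-and-upstream workflow (replace <your-username> with your GitHub username; 'upstream' is just the conventional name for the course remote):
$ git clone https://github.com/<your-username>/2020-CS109B.git
$ cd 2020-CS109B
$ git remote add upstream https://github.com/Harvard-IACS/2020-CS109B.git
$ git pull upstream master
The last command pulls course updates into your working copy; overlapping edits will surface as merge conflicts for you to resolve.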
2. Running code:¶
Option 2A: Managing Local Resources (supported by cs109b)¶
Use Virtual Environments: I cannot stress this enough!¶
Isolating your projects inside dedicated environments helps you manage dependencies and therefore keep your sanity: certain library installations conflict with one another, and you can recover from mess-ups by simply deleting an environment.
In order of increasing isolation, here is what you can do: a) set up a virtual environment, b) set up a virtual machine. The two most popular tools for setting up environments are:
- conda (a package and environment manager)
- pip (a Python package manager) with virtualenv (a tool for creating environments); a sketch of this route follows below
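If you choose the pip route, a minimal sketch (the environment name cs109b-env is arbitrary):
$ pip install virtualenv
$ virtualenv cs109b-env
$ source cs109b-env/bin/activate
On Windows the activation script is cs109b-env\Scripts\activate instead.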
We recommend using conda for package installation and environments. conda installs packages from the Anaconda Repository and Anaconda Cloud, whereas pip installs packages from PyPI. Even if you are using conda as your primary package installer and are inside a conda environment, you can still use pip install for those rare packages that are not included in the conda ecosystem.
For more details on how to manage conda environments, see the conda user guide: https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html
$ cd /2020-CS109B/content/labs/lab01/
$ conda env create -f cs109b.yml
$ conda activate cs109b
We have included the packages that you will need in the cs109b.yml file. It should be in the same directory as this notebook.
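A quick sanity check that the environment is active (output will differ on your machine):
$ conda env list
$ python -c "import sys; print(sys.executable)"
The active environment is marked with an asterisk in the first command's output, and the printed interpreter path should point inside the cs109b environment. If the yml file is later updated, conda env update -f cs109b.yml refreshes the environment in place.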
Option 2B: Using Cloud Resources (optional)¶
Using SEAS JupyterHub (supported by cs109b)¶
Instructions for Using SEAS JupyterHub
SEAS and FAS are providing you with a platform on AWS to use for the class, accessible via the 'JupyterHub' menu link in Canvas. Between now and March 1, each student will have their own t2.medium AWS EC2 instance with 4GB of RAM and 2 vCPUs. After March 1st the instances will be upgraded to p2.xlarge AWS EC2 instances with a GPU, 61GB of RAM, 12GB of GPU RAM, 10GB of disk space, and 4 vCPUs.
Most of the libraries, such as keras, tensorflow, and pandas, are pre-installed. If a library is missing you may install it via the Terminal, as shown below.
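For example, to install a missing package into your home directory (pygam is just an illustration; substitute whatever you need):
$ pip install --user pygam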
NOTE: The AWS platform is funded by SEAS and FAS for the purposes of this class. It is not billed against your individual credit. Use it with prudence; it may not be used for purposes unrelated to this course.
Help us keep this service: make sure you stop your instance as soon as you no longer need it.
Using Google Colab (on your own)¶
Google's Colab platform (https://colab.research.google.com/) offers a GPU environment to test your ideas. It is fast and free, with the caveat that your files persist for only about 12 hours. The solution is to keep your files in a repository and clone it each time you use Colab, as sketched below.
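In a Colab notebook cell, the ! prefix runs shell commands, so cloning the course repo looks like this (the clone lands in Colab's ephemeral filesystem and disappears when the session ends):
!git clone https://github.com/Harvard-IACS/2020-CS109B.git
%cd 2020-CS109B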
Using AWS in the Cloud (on your own)¶
For those of you who want to have your own machines in the Cloud to run whatever you want, Amazon Web Services is a (paid) solution. For more see: https://docs.aws.amazon.com/polly/latest/dg/setting-up.html
Remember, AWS is a paid service so if you let your machine run for days you will get charged!
(Image source: possibly Stanford's cs231n, via Medium.)
3. Ensuring everything is installed correctly¶
Packages we will need for this class¶
Clustering:
- scikit-learn - https://scikit-learn.org/stable/
- scipy - https://www.scipy.org
- gap_statistic (by Miles Granger) - https://anaconda.org/milesgranger/gap-statistic/notebook
Smoothing:
- statsmodels - https://www.statsmodels.org/ (examples: https://www.statsmodels.org/stable/examples/index.html#regression)
- scipy - https://www.scipy.org
- pyGAM - https://pygam.readthedocs.io/en/latest/
Bayes:
- pymc3 - https://docs.pymc.io
Neural Networks:
- tensorflow - https://www.tensorflow.org/
- keras (via tf.keras, TensorFlow's implementation of the keras API) - https://keras.io/
We will test that these packages load correctly in our environment.
from sklearn import datasets
iris = datasets.load_iris()
digits = datasets.load_digits()
digits.target # you should see [0, 1, 2, ..., 8, 9, 8]
from scipy import misc
import matplotlib.pyplot as plt
face = misc.face()
plt.imshow(face)
plt.show() # you should see a raccoon
import statsmodels.api as sm
import statsmodels.formula.api as smf
# Load data
dat = sm.datasets.get_rdataset("Guerry", "HistData").data
dat.head()
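As a quick check that model fitting works too, here is the ordinary least squares example from the statsmodels getting-started guide, run on the Guerry data we just loaded:
# Regress lottery wagers on literacy and log population (1831)
results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=dat).fit()
print(results.summary())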
from pygam import PoissonGAM, s, te
from pygam.datasets import chicago
from mpl_toolkits.mplot3d import Axes3D
# Fit a Poisson GAM to the Chicago air pollution dataset
X, y = chicago(return_X_y=True)
gam = PoissonGAM(s(0, n_splines=200) + te(3, 1) + s(2)).fit(X, y)
# Plot the partial dependence of the tensor term as a 3D surface
XX = gam.generate_X_grid(term=1, meshgrid=True)
Z = gam.partial_dependence(term=1, X=XX, meshgrid=True)
ax = plt.axes(projection='3d')
ax.plot_surface(XX[0], XX[1], Z, cmap='viridis')
import pymc3 as pm
print('Running PyMC3 v{}'.format(pm.__version__)) # you should see something like 'Running PyMC3 v3.8'
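The real test of a pymc3 install is that the sampler (and the compiler toolchain behind it) runs, so here is a minimal toy model sketch (all names and numbers are illustrative):
# Infer the mean of 100 standard-normal draws
with pm.Model() as toy_model:
    mu = pm.Normal('mu', mu=0, sigma=10)
    obs = pm.Normal('obs', mu=mu, sigma=1, observed=np.random.randn(100))
    trace = pm.sample(500, tune=500)
print(trace['mu'].mean())  # should be close to 0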
Plotting¶
matplotlib and seaborn¶
- matplotlib: the standard Python plotting library - https://matplotlib.org
- seaborn: statistical data visualization. seaborn works great with pandas and can also be customized easily. Here is the basic seaborn tutorial: https://seaborn.pydata.org/tutorial.html
Plotting a function of 2 variables using contours¶
In optimization, our objective function will often be a function of two or more variables. While it's hard to visualize a function of more than 3 variables, it's very informative to plot one of 2 variables. To do this we use contours. First we define the $x_1$ and $x_2$ variables and then construct their pairs using meshgrid.
import seaborn as sns  # conventional alias; imported here to check that it loads
x1 = np.linspace(-0.1, 0.1, 50)   # 50 points along x1
x2 = np.linspace(-0.1, 0.1, 100)  # 100 points along x2
xx, yy = np.meshgrid(x1, x2)      # all (x1, x2) pairs as two 2D arrays
z = np.sqrt(xx**2 + yy**2)        # distance from the origin at each pair
plt.contour(x1, x2, z);
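Since seaborn was imported above but not exercised, here is a one-line sanity check using one of its built-in demo datasets (load_dataset downloads a small CSV the first time it runs):
tips = sns.load_dataset('tips')
sns.scatterplot(x='total_bill', y='tip', data=tips)
plt.show()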
We will be using tensorflow and keras¶
TensorFlow is a framework for representing complicated ML algorithms and executing them on any platform, from a phone to a distributed system using GPUs. Developed by Google Brain, TensorFlow is used very broadly today.
Keras is a high-level API used for fast prototyping, advanced research, and production. We will use tf.keras, which is TensorFlow's implementation of the keras API.
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.regularizers import l2
tf.keras.backend.clear_session() # For easy reset of notebook state.
print(tf.__version__) # You should see a >2.0.0 here!
print(tf.keras.__version__)
# Check whether this machine has NVIDIA GPUs (returns an empty list if none). Mine does not.
gpus = tf.config.experimental.list_physical_devices('GPU')
print(f'My computer has the following GPUs: {gpus}')
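The Sequential, Dense, and l2 imports above are not exercised by the version check, so here is a minimal sketch that builds and compiles a tiny model (the architecture and dimensions are arbitrary, purely to confirm the API works):
# A toy two-layer network with L2 regularization on the first layer
model = Sequential([
    Dense(16, activation='relu', kernel_regularizer=l2(0.01), input_shape=(10,)),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()  # prints layer shapes and parameter counts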
Submit this notebook to Canvas with the output produced. Describe below the environment in which you will be working, e.g., 'I have installed the environment needed locally and have tested all the code in this notebook' and/or 'I am using JupyterHub.'
---------------- your answer here