CS-109B Advanced Data Science
Lab 8: Recurrent Neural Networks¶
Harvard University
Fall 2020
Instructors: Mark Glickman, Pavlos Protopapas, and Chris Tanner
Lab Instructors: Chris Tanner and Eleni Angelaki Kaxiras
Content: Srivatsan Srinivasan, Pavlos Protopapas, Chris Tanner
# RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
Learning Goals¶
In this lab we will look at Recurrent Neural Networks (RNNs), LSTMs and their building blocks.
By the end of this lab, you should:
- know how to put together the building blocks used in RNNs and their variants (GRU, LSTM) in Keras, with an example
- have a good understanding of how sequences -- any data that has some temporal semantics (e.g., time series, natural language, images, etc.) -- fit into and benefit from a recurrent architecture
- be familiar with preprocessing text and dynamic embeddings
- be familiar with gradient issues on RNNs processing longer sentence lengths
- understand different kinds of LSTM architectures, classifiers, sequence to sequence models and their far-reaching applications
1. IMDb Review Classification: Feedforward, CNN, RNN, LSTM¶
In this task, we are going to do sentiment classification on a movie review dataset. We are going to build a feedforward net, a convolutional neural net, and a recurrent net, and combine one or more of them, to compare their performance. A sentence can be thought of as a sequence of words that collectively represent meaning. Individual words impact the meaning, and thus context matters: words that occur earlier in a sentence influence the sentence's structure and meaning in its latter part (e.g., Jose asked Anqi if she were going to the library today). Likewise, words that occur later in a sentence can affect the meaning of earlier words (e.g., Apple is an interesting company). As we have seen in lecture, if we wish to make use of a full sentence's context in both directions, then we should use a bi-directional RNN (e.g., Bi-LSTM). For the purpose of this tutorial, we are going to restrict ourselves to uni-directional RNNs.
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.layers import LSTM, SimpleRNN
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
import numpy as np
# fix random seed for reproducibility
np.random.seed(1)
# We want to have a finite vocabulary to make sure that our word matrices are not arbitrarily large
vocabulary_size = 10000
# We also want to have a finite length of reviews and not have to process really long sentences.
max_review_length = 500
Computers have no built-in knowledge of words or their meanings and cannot understand them in the rich way that humans do -- hence the purpose of Natural Language Processing (NLP). As with any data science or machine learning task, the first crucial step is to clean (pre-process) your data so that you can soundly make use of it. Within NLP, this first step is called Tokenization, and it concerns how to represent each token (a.k.a. word) of your corpus (i.e., dataset).
TOKENIZATION¶
A token refers to a single, atomic unit of meaning (i.e., a word). How should our computers represent each word? We could read in our corpus word by word and store each word as a String (data structure). However, Strings tend to use more computer memory than Integers and can become cumbersome. As long as we preserve the uniqueness of the tokens and are consistent, we are better off converting each distinct word to a distinct number (Integer). This is standard practice within NLP / computer science / data science, etc.
Let's look at a small example of tokenization.
If the six sentences below were our entire corpus, our conversion would look as follows:
- i have books - [4, 5, 1]
- interesting books are useful - [6, 1, 2, 7]
- i have computers - [4, 5, 3]
- computers are interesting and useful - [3, 2, 6, 8, 7]
- books and computers are both valuable - [1, 8, 3, 2, 10, 11]
- bye bye - [9, 9]
Here we created tokens for the vocabulary based on frequency of occurrence (more frequent words receive smaller ids, with ties broken by order of first appearance). Hence, we assigned the following tokens:
books-1, are-2, computers-3, i-4, have-5, interesting-6, useful-7, and-8, bye-9, both-10, valuable-11
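To make this concrete, here is a minimal sketch of frequency-based tokenization on the toy corpus above. This illustrates the idea only; it is not the exact scheme Keras used to tokenize the IMDb dataset.
from collections import Counter

corpus = ["i have books", "interesting books are useful", "i have computers",
          "computers are interesting and useful",
          "books and computers are both valuable", "bye bye"]
counts = Counter(word for sentence in corpus for word in sentence.split())
# more frequent words get smaller ids; ties keep first-appearance order
token_of = {word: i + 1 for i, (word, _) in enumerate(counts.most_common())}
print(token_of)
print([[token_of[w] for w in s.split()] for s in corpus])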
Thankfully, our dataset is already represented in such a tokenized form.
NOTE: Oftentimes, depending on your NLP task, it is useful to also perform other pre-processing/cleaning steps, such as:
- treating each punctuation mark as a token (e.g., , . ! ? are each separate tokens)
- lower-casing all words (so that a given word isn't treated differently just because it starts a sentence or not)
- marking sentence boundaries with unique symbols (e.g., <s> and </s>)
- removing words that are incredibly common (e.g., function words, (in)definite articles). These are referred to as 'stopwords'. For language modelling, we DO NOT remove stopwords, since a sentence's meaning needs to include all of its original words.
Load data¶
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocabulary_size)
print('Number of reviews', len(X_train))
print('Length of first and fifth review before padding', len(X_train[0]) ,len(X_train[4]))
print('First review', X_train[0])
print('First label', y_train[0])
Preprocess data¶
If we were training our RNN one sentence at a time, it would be okay to have sentences of varying lengths. However, as with any neural network, it can sometimes be advantageous to train inputs in batches. When doing so with RNNs, the input tensors in a batch need to have the same length/dimensions. Thus, let's pad our sentences.
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
print('Length of first and fifth review after padding', len(X_train[0]) ,len(X_train[4]))
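As a quick sanity check of what pad_sequences does on a toy input (with its default settings, it left-pads with zeros and truncates from the front):
print(sequence.pad_sequences([[1, 2, 3]], maxlen=5))           # [[0 0 1 2 3]]
print(sequence.pad_sequences([[1, 2, 3, 4, 5, 6]], maxlen=5))  # [[2 3 4 5 6]]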
MODEL 1A : FEED-FORWARD NETWORKS WITHOUT EMBEDDINGS¶
Let us build a single-layer feed-forward net with a hidden layer of 250 nodes. Each input would be a 500-dim vector of tokens since we padded all our sequences to size 500.
model = Sequential()
model.add(Dense(250, activation='relu',input_dim=max_review_length))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
MODEL 1B : FEED-FORWARD NETWORKS WITH EMBEDDINGS¶
What is an embedding layer ?¶
An embedding is a "distributed representation" (e.g., a vector) of a particular atomic item (e.g., a word token, an object, etc.). When representing items by embeddings:
- each distinct item should be represented by its own unique embedding
- the semantic similarity between items should correspond to the similarity between their respective embeddings (i.e., words that are more similar to one another should have embeddings that are more similar to each other).
There are essentially an infinite number of ways to create such embeddings, and since these representations have such a great influence on the performance of our models, there has been an incredible amount of research dedicated to this very aspect. If you are interested in learning more, start with the astronomically impactful papers of word2vec and GloVe.
In general, though, one can view the embedding process as a linear projection from one vector space to another (e.g., a vector space of unique words being mapped to a world of fixed-length, dense vectors filled with continuous-valued numbers). For NLP, we usually use embeddings to project the one-hot encodings of words (i.e., a vector that is the length of the entire vocabulary, filled with all zeros except for a single 1 at the position corresponding to the particular word) onto a lower-dimensional continuous space (e.g., vectors of size 100) so that the input surface is dense and possibly smooth. Thus, one can view this embedding layer process as just a transformation from $\mathbb{R}^{inp}$ to $\mathbb{R}^{emb}$.
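To see the embedding layer in isolation, here is a toy sketch (the dimensions are chosen arbitrarily for illustration): the layer is just a trainable lookup table of shape (vocabulary_size, embedding_dim) that maps each integer token id to a row of that table.
toy = Sequential()
toy.add(Embedding(input_dim=10, output_dim=4, input_length=3))
toy.compile('rmsprop', 'mse')
toy_input = np.array([[1, 5, 9]])          # one "sentence" of 3 token ids
print(toy.predict(toy_input).shape)        # (1, 3, 4): one 4-dim vector per token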
embedding_dim = 100
model = Sequential()
# inputs will be converted from batch_size * sentence_length to batch_size * sentence_length * embedding_dim
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
# fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=2, batch_size=128, verbose=2)
# evaluate the model on the test set
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
MODEL 2 : CONVOLUTIONAL NEURAL NETWORKS (CNN)¶
Text can be thought of as a 1-dimensional sequence (a single, long vector), and we can apply 1D convolutions over a set of word embeddings. Let us walk through convolutions on text data with this blog. If you want to learn more, read this published and well-cited paper from my friend, Byron Wallace.
Fit a 1D convolution with 200 filters, kernel size 3, followed by a feed-forward layer of 250 nodes, and ReLU and Sigmoid activations as appropriate.
# %load sol2.py
# create the CNN
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(Conv1D(filters=200, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
# train the CNN
model.fit(X_train, y_train, epochs=2, batch_size=128)
# evalute the CNN
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
MODEL 3 : Simple RNN¶
Two great blogs that are helpful for understanding the workings of an RNN and an LSTM are
- http://karpathy.github.io/2015/05/21/rnn-effectiveness/
- http://colah.github.io/posts/2015-08-Understanding-LSTMs/
At a high level, an RNN is similar to a feed-forward neural network (FFNN) in that there is an input layer, a hidden layer, and an output layer. The input layer is fully connected to the hidden layer, and the hidden layer is fully connected to the output layer. However, the crux of what makes it a recurrent neural network is that the hidden layer at a given time t is based not only on the input layer at time t but also on the hidden layer from time t-1.
Mathematically, a SimpleRNN can be defined by the following recurrence relation (with $W$, $U$, $b$ learned weights shared across all time steps):
$h_t = \tanh(W x_t + U h_{t-1} + b)$
If we unroll this recurrence over the full length of our sequences, the RNN architecture can be viewed as a chain of copies of the same cell, each feeding its hidden state to the next (this is also referred to as 'unrolling' the network).
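To make the unrolling concrete, here is a minimal NumPy sketch of a SimpleRNN forward pass under the recurrence above (toy shapes, randomly initialized weights; the names Wx, Wh, and b are ours, not Keras's):
T, inp_dim, hid_dim = 4, 3, 5
Wx = np.random.randn(hid_dim, inp_dim) * 0.1   # input-to-hidden weights
Wh = np.random.randn(hid_dim, hid_dim) * 0.1   # hidden-to-hidden weights
b = np.zeros(hid_dim)
h = np.zeros(hid_dim)                          # initial hidden state h_0
xs = np.random.randn(T, inp_dim)               # a sequence of T input vectors
for x_t in xs:                                 # one iteration per time step
    h = np.tanh(Wx @ x_t + Wh @ h + b)         # h_t depends on x_t and h_{t-1}
print(h)                                       # final hidden state, what SimpleRNN(5) returns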
# %load sol3.py
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(SimpleRNN(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=20, batch_size=16)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
RNNs and vanishing/exploding gradients¶
Let us use sigmoid activations as an example. The derivative of a sigmoid can be written as
$\sigma'(x) = \sigma(x)\,(1 - \sigma(x))$
Remember, an RNN is a "really deep" feed-forward-esque network (when unrolled in time). Hence, backpropagation happens from $h_t$ all the way back to $h_1$, multiplying in one such derivative factor per time step. Notice that $\sigma'(x) \leq 0.25$ everywhere, and it tends to 0 whenever the pre-activation $x$ has large magnitude (the sigmoid saturates). A long product of these small factors effectively "vanishes" the gradient, so the earlier time steps $h_{1:t-1}$ receive almost no useful gradient signal and learn very little.
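Here is a quick numeric illustration of this effect (toy values; the exact numbers vary from run to run): since $\sigma'(x) \leq 0.25$, multiplying one such factor per time step shrinks the gradient exponentially.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

grad = 1.0
for t in range(50):                 # pretend we backprop through 50 time steps
    s = sigmoid(np.random.randn())  # a random pre-activation at step t
    grad *= s * (1 - s)             # each factor is at most 0.25
print(grad)                         # astronomically small: the gradient has "vanished"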
LSTMs and GRU¶
LSTM and GRU are two sophisticated implementations of RNNs that have gates (one could say that their success hinges on using gates). A gate emits a value between 0 and 1, which acts like a probability of letting information through. For instance, an LSTM is built on the following state updates, where each $L$ is a linear transformation $L(x) = Wx + b$ (each gate has its own weights):
$f_t = \sigma(L_f([h_{t-1}, x_t]))$
$i_t = \sigma(L_i([h_{t-1}, x_t]))$
$o_t = \sigma(L_o([h_{t-1}, x_t]))$
$\hat{C}_t = \tanh(L_C([h_{t-1}, x_t]))$
$C_t = f_t * C_{t-1} + i_t * \hat{C}_t$ (using the forget gate $f_t$ and input gate $i_t$, the network learns how much information to retain and how much to write anew)
$h_t = o_t * \tanh(C_t)$
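The following is a minimal NumPy sketch of a single LSTM step implementing the updates above (toy shapes, random weights; Keras stores these four linear maps as fused weight matrices internally):
inp_dim, hid_dim = 3, 5
sigmoid = lambda x: 1 / (1 + np.exp(-x))

def make_linear():                       # a fresh L(x) = W*x + b per gate
    W = np.random.randn(hid_dim, inp_dim + hid_dim) * 0.1
    b = np.zeros(hid_dim)
    return lambda v: W @ v + b

L_f, L_i, L_o, L_C = make_linear(), make_linear(), make_linear(), make_linear()

x_t = np.random.randn(inp_dim)
h_prev, C_prev = np.zeros(hid_dim), np.zeros(hid_dim)
v = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
f_t = sigmoid(L_f(v))                    # forget gate
i_t = sigmoid(L_i(v))                    # input gate
o_t = sigmoid(L_o(v))                    # output gate
C_hat = np.tanh(L_C(v))                  # candidate cell state
C_t = f_t * C_prev + i_t * C_hat         # keep some old memory, write some new
h_t = o_t * np.tanh(C_t)                 # gated view of the cell state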
MODEL 4 : LSTM¶
Now, let's use an LSTM model to do classification! To make it a fair comparison to the SimpleRNN, let's start with the same architecture hyper-parameters (e.g., number of hidden nodes, epochs, and batch size). Then, experiment with increasing the number of nodes, stacking multiple layers, applying dropout, etc. Check the number of parameters this model entails.
# %load sol4.py
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=3, batch_size=64)
# evaluation of the LSTM's performance
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
MODEL 5 : CNN + LSTM¶
CNNs are good at learning spatial features, and sentences can be thought of as 1-D spatial vectors (with dimensionality determined by the number of words in the sentence). Here, we apply an LSTM over the features learned by the CNN (after a max-pooling layer). This leverages the power of CNNs and LSTMs combined! We expect the CNN to pick out invariant features across the 1-D spatial structure (i.e., the sentence) that characterize good and bad sentiment. These learned spatial features may then be processed as a sequence by an LSTM layer, and the final classification is made via a feed-forward connection to a single node.
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
CONCLUSION¶
We saw the power of sequence models and how useful they are in text classification. They deliver solid performance, have a low memory footprint (thanks to shared parameters), and are able to understand and leverage the temporally connected information contained in the inputs. There is still an open debate in the research community about the performance vs. memory benefits of CNNs vs. RNNs.
As a side-note and bit of history: how amazing is it that we can construct these neural networks, train them, and evaluate them in just a few lines of code?! Imagine if we didn't have deep learning libraries like Keras and Tensorflow; we'd have to write backpropagation and gradient descent by hand. Our last network could easily require thousands of lines of code and many hours of debugging. This is what many people did just 8 years ago, since deep learning wasn't common and the community hadn't yet written nicely packaged libraries. Many libraries have come and gone, but nowadays most people use either Tensorflow/Keras (by Google) or PyTorch (by Facebook). I expect them to remain as great libraries for the foreseeable future, so if you're interested in deep learning, it's worth the investment to learn one, or both, of them well.
2. 231+432 = 665... It's not? Let's ask our LSTM¶
In this exercise, we are going to teach addition to our model. Given two numbers (each at most 999), the model outputs their sum (at most 1998). The input is provided as the string '231+432', and the model provides its output as ' 663' (here, the leading space is the padding character). We are not going to use any external dataset; we will construct our own dataset for this exercise.
The exercise effectively "translates" a sequence of characters '231+432' to another sequence of characters ' 663'; hence, this class of models is called sequence-to-sequence models (aka seq2seq). Such architectures have profound applications in several real-life tasks such as machine translation, summarization, image captioning, etc.
To be clear, sequence-to-sequence (aka seq2seq) models take as input a sequence of length N and return a sequence of length M, where N and M may or may not differ, and every single observation/input may be of different values, too. For example, machine translation concerns converting text from one natural language to another (e.g., translating English to French). Google Translate is an example, and their system is a seq2seq model. The input (e.g., an English sentence) can be of any length, and the output (e.g., a French sentence) may be of any length.
Background knowledge: The earliest and simplest seq2seq model works by having one RNN for the input, just like we've always done, and we refer to it as the "encoder." The final hidden state of the encoder RNN is fed as input to another RNN that we refer to as the "decoder." The job of the decoder is to generate the output, one token at a time. This may seem really limiting, as it relies on the encoder encapsulating the entire input sequence in a single fixed-length hidden state vector. It seems unrealistic that we could encode the entire meaning of a sentence in just one vector. Yet, results even in this simplistic setup can be quite impressive. In fact, these early results were compelling enough that these models quickly displaced decades of earlier machine translation work.
from __future__ import print_function
from keras.models import Sequential
from keras import layers
from keras.layers import Dense, RepeatVector, TimeDistributed
import numpy as np
from six.moves import range
The less interesting data generation and preprocessing¶
class CharacterTable(object):
    def __init__(self, chars):
        self.chars = sorted(set(chars))
        self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
        self.indices_char = dict((i, c) for i, c in enumerate(self.chars))

    # converts a String of characters into a one-hot embedding/vector
    def encode(self, C, num_rows):
        x = np.zeros((num_rows, len(self.chars)))
        for i, c in enumerate(C):
            x[i, self.char_indices[c]] = 1
        return x

    # converts a one-hot embedding/vector into a String of characters
    def decode(self, x, calc_argmax=True):
        if calc_argmax:
            x = x.argmax(axis=-1)
        return ''.join(self.indices_char[i] for i in x)
TRAINING_SIZE = 50000
DIGITS = 3
# the largest answer (999 + 999 = 1998) has DIGITS + 1 digits
MAXOUTPUTLEN = DIGITS + 1
# the longest question (e.g., '999+999') has 2 * DIGITS + 1 characters
MAXLEN = DIGITS + 1 + DIGITS
chars = '0123456789+ '
ctable = CharacterTable(chars)
def return_random_digit():
    return np.random.choice(list('0123456789'))

# generate a new number of up to `DIGITS` digits
def generate_number():
    num_digits = np.random.randint(1, DIGITS + 1)
    return int(''.join(return_random_digit()
                       for i in range(num_digits)))

# generate `num_examples` pairs of random numbers
def data_generate(num_examples):
    questions = []
    answers = []
    seen = set()
    print('Generating data...')
    while len(questions) < num_examples:
        a, b = generate_number(), generate_number()
        # don't allow duplicates; this is good practice for training,
        # as we will minimize memorizing seen examples
        key = tuple(sorted((a, b)))
        if key in seen:
            continue
        seen.add(key)
        # pad the data with spaces so that the length is always MAXLEN
        q = '{}+{}'.format(a, b)
        query = q + ' ' * (MAXLEN - len(q))
        ans = str(a + b)
        # answers can be of maximum size DIGITS + 1
        ans += ' ' * (MAXOUTPUTLEN - len(ans))
        questions.append(query)
        answers.append(ans)
    print('Total addition questions:', len(questions))
    return questions, answers
def encode_examples(questions, answers):
    x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=bool)
    y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=bool)
    for i, sentence in enumerate(questions):
        x[i] = ctable.encode(sentence, MAXLEN)
    for i, sentence in enumerate(answers):
        y[i] = ctable.encode(sentence, DIGITS + 1)
    # shuffle the examples
    indices = np.arange(len(y))
    np.random.shuffle(indices)
    return x[indices], y[indices]
q,a = data_generate(TRAINING_SIZE)
x,y = encode_examples(q,a)
# divides our data into training and validation
split_at = len(x) - len(x) // 10
x_train, x_val, y_train, y_val = x[:split_at], x[split_at:],y[:split_at],y[split_at:]
print('Training Data shape:')
print('X : ', x_train.shape)
print('Y : ', y_train.shape)
print('Sample Question(in encoded form) : ', x_train[0], y_train[0])
print('Sample Question(in decoded form) : ', ctable.decode(x_train[0]),'Sample Output : ', ctable.decode(y_train[0]))
Let's learn two wrapper functions in Keras - TimeDistributed and RepeatVector - with some dummy examples.¶
TimeDistributed is a wrapper that applies a given layer to every timestep of its input. For instance, if we have a feed-forward layer that converts a 10-dim vector to a 5-dim vector, then wrapping it in TimeDistributed converts a batch_size * sentence_len * 10 tensor into a batch_size * sentence_len * 5 tensor.
model = Sequential()
# Inputs will be batch_size * time_steps * input_vector_dim (fed to Dense)
# Outputs will be batch_size * time_steps * output_vector_dim
# Here, Dense() converts a 5-dim input vector to an 8-dim vector.
model.add(TimeDistributed(Dense(8), input_shape=(3, 5)))
input_array = np.random.randint(10, size=(1,3,5))
print("Shape of input : ", input_array.shape)
model.compile('rmsprop', 'mse')
output_array = model.predict(input_array)
print("Shape of output : ", output_array.shape)
RepeatVector repeats a vector a specified number of times: the shape changes from (batch_size, num_elements) to (batch_size, num_repetitions, num_elements).
model = Sequential()
# converts from 1*10 to 1*6
model.add(Dense(6, input_dim=10))
print(model.output_shape)
# converts from 1*6 to 1*3*6
model.add(RepeatVector(3))
print(model.output_shape)
input_array = np.random.randint(1000, size=(1, 10))
print("Shape of input : ", input_array.shape)
model.compile('rmsprop', 'mse')
output_array = model.predict(input_array)
print("Shape of output : ", output_array.shape)
# note: `None` is the batch dimension
print('Input : ', input_array[0])
print('Output : ', output_array[0])
MODEL ARCHITECTURE¶
Note: Whenever you initialize an LSTM in Keras, the default option is return_sequences = False. This means that the next component only gets to see the LSTM's final hidden state. On the other hand, if you set return_sequences = True, the LSTM returns its hidden state at every time step; the next component must then be able to consume inputs of that form.
Think about how this is relevant to this model's architecture and the TimeDistributed module we just learned.
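Here is a small shape check contrasting the two settings (toy dimensions chosen purely for illustration):
last_only = Sequential()
last_only.add(layers.LSTM(4, input_shape=(7, 2)))   # return_sequences=False (default)
print(last_only.output_shape)                       # (None, 4): final hidden state only

every_step = Sequential()
every_step.add(layers.LSTM(4, input_shape=(7, 2), return_sequences=True))
print(every_step.output_shape)                      # (None, 7, 4): hidden state per time step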
Build an encoder and a decoder, each a single layer with 128 nodes, plus an appropriate Dense layer as needed by the model.
# Hyperparameters
RNN = layers.LSTM
HIDDEN_SIZE = 128
BATCH_SIZE = 128
LAYERS = 1
print('Build model...')
model = Sequential()
# ENCODING
model.add(RNN(HIDDEN_SIZE, input_shape=(MAXLEN, len(chars))))
model.add(RepeatVector(MAXOUTPUTLEN))
# DECODING
for _ in range(LAYERS):
    # return the hidden state at every time step
    model.add(RNN(HIDDEN_SIZE, return_sequences=True))
model.add(TimeDistributed(layers.Dense(len(chars), activation='softmax')))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
Let's check how well our model trained.
for iteration in range(1, 2):
    print()
    model.fit(x_train, y_train,
              batch_size=BATCH_SIZE,
              epochs=20,
              validation_data=(x_val, y_val))
    # Select 20 samples from the validation set at random so
    # we can visualize errors.
    print('Finished iteration ', iteration)
    numcorrect = 0
    numtotal = 20
    for i in range(numtotal):
        ind = np.random.randint(0, len(x_val))
        rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])]
        preds = model.predict_classes(rowx, verbose=0)
        q = ctable.decode(rowx[0])
        correct = ctable.decode(rowy[0])
        guess = ctable.decode(preds[0], calc_argmax=False)
        print('Question', q, end=' ')
        print('True', correct, end=' ')
        print('Guess', guess, end=' ')
        if guess == correct:
            print('Good job')
            numcorrect += 1
        else:
            print('Fail')
    print('The model scored ', numcorrect * 100 / numtotal, ' % in its test.')
EXERCISE¶
- Try changing the hyperparameters, use other RNNs, add more layers, and check whether increasing the number of epochs helps.
- Try reversing the inputs in the validation set and check whether the model has learned the commutative property of addition.
- Try printing the hidden layer for two commutative inputs (e.g., '231+432' and '432+231') and check whether the learned hidden representations are the same or similar. Do we expect this to be true? If so, why? If not, why not? You can access a layer by index via model.layers, and layer.output will give the output of that layer.
- Try doing addition in the RNN model the same way we do it by hand: reverse the order of digits and, at each time step, feed in one digit from each number, get an output, carry the hidden state forward, and feed in the next pair of digits, and so on (units in the first time step, tens in the second time step, etc.).