Key Word(s): RNN, LSTM, Keras


CS-109B Advanced Data Science

Lab 8: Recurrent Neural Networks

Harvard University
Fall 2020
Instructors: Mark Glickman, Pavlos Protopapas, and Chris Tanner
Lab Instructors: Chris Tanner and Eleni Angelaki Kaxiras
Content: Srivatsan Srinivasan, Pavlos Protopapas, Chris Tanner

In [1]:
# RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
Out[1]:

Learning Goals

In this lab we will look at Recurrent Neural Networks (RNNs), LSTMs and their building blocks.

By the end of this lab, you should:

  • know how to put together the building blocks used in RNNs and their variants (GRU, LSTM) in Keras, with an example
  • have a good understanding of how sequences -- any data that has some temporal semantics (e.g., time series, natural language, images, etc.) -- fit into and benefit from a recurrent architecture
  • be familiar with preprocessing text and dynamic embeddings
  • be familiar with the gradient issues that arise when RNNs process longer sequences
  • understand different kinds of LSTM architectures -- classifiers and sequence-to-sequence models -- and their far-reaching applications

1. IMDb Review Classification: Feedforward, CNN, RNN, LSTM

In this task, we are going to do sentiment classification on a movie review dataset. We are going to build a feedforward net, a convolutional neural net, and a recurrent net, and combine one or more of them to compare their performance. A sentence can be thought of as a sequence of words that collectively represent meaning, and individual words impact that meaning. Thus, context matters: words that occur earlier in a sentence influence the sentence's structure and meaning in its later part (e.g., "Jose asked Anqi if she were going to the library today"). Likewise, words that occur later in a sentence can affect the meaning of earlier words (e.g., "Apple is an interesting company"). As we have seen in lecture, if we wish to make use of a full sentence's context in both directions, then we should use a bi-directional RNN (e.g., a Bi-LSTM), as sketched below. For the purpose of this tutorial, however, we are going to restrict ourselves to uni-directional RNNs.
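As a point of reference only (it is not used in the rest of this lab), a bi-directional classifier can be sketched in Keras with the Bidirectional wrapper; the layer sizes here simply mirror the ones we use below:

In [ ]:
from keras.models import Sequential
from keras.layers import LSTM, Bidirectional, Dense
from keras.layers.embeddings import Embedding

bi_model = Sequential()
bi_model.add(Embedding(10000, 100, input_length=500))
# the Bidirectional wrapper runs the LSTM over the sequence in both directions
# and concatenates the two final hidden states
bi_model.add(Bidirectional(LSTM(100)))
bi_model.add(Dense(1, activation='sigmoid'))
bi_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])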

In [3]:
import numpy as np
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Flatten, LSTM, SimpleRNN
from keras.layers.embeddings import Embedding
from keras.layers.convolutional import Conv1D, MaxPooling1D
from keras.preprocessing import sequence

# fix random seed for reproducibility
np.random.seed(1)
In [4]:
# We want to have a finite vocabulary to make sure that our word matrices are not arbitrarily large
vocabulary_size = 10000

# We also want to have a finite length of reviews and not have to process really long sentences.
max_review_length = 500

Computers have no built-in knowledge of words or their meanings and cannot understand them in the rich way that humans do -- hence the need for Natural Language Processing (NLP). As with any data science, computer science, or machine learning task, the first crucial step is to clean (pre-process) your data so that you can soundly make use of it. Within NLP, this first step is called Tokenization, and it concerns how to represent each token (a.k.a. word) of your corpus (i.e., dataset).

TOKENIZATION

A token refers to a single, atomic unit of meaning (i.e., a word). How should our computers represent each word? We could read in our corpus word by word and store each word as a String (data structure). However, Strings tend to use more computer memory than Integers and can become cumbersome. As long as we preserve the uniqueness of the tokens and are consistent, we are better off converting each distinct word to a distinct number (Integer). This is standard practice within NLP, computer science, and data science. Let's walk through a small example of tokenization.

If the six sentences below were our entire corpus, our conversion would look as follows:

  1. i have books - [7, 6, 2]
  2. interesting books are useful - [8, 2, 1, 9]
  3. i have computers - [7, 6, 3]
  4. computers are interesting and useful - [3, 1, 8, 4, 9]
  5. books and computers are both valuable - [2, 4, 3, 1, 10, 11]
  6. bye bye - [5, 5]

We create tokens for the vocabulary based on frequency of occurrence (more frequent words get smaller indices; ties are broken alphabetically here). Hence, we assign the following tokens:

are-1, books-2, computers-3, and-4, bye-5, have-6, i-7, interesting-8, useful-9, both-10, valuable-11

Thankfully, our dataset is already represented in such a tokenized form.
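A minimal sketch of how such a frequency-based mapping could be built by hand (purely illustrative -- the Keras IMDb loader has already done the equivalent work for us):

In [ ]:
from collections import Counter

corpus = [
    "i have books",
    "interesting books are useful",
    "i have computers",
    "computers are interesting and useful",
    "books and computers are both valuable",
    "bye bye",
]

# count word frequencies over the whole (lower-cased) corpus
counts = Counter(word for sentence in corpus for word in sentence.split())

# more frequent words get smaller indices; ties broken alphabetically
word_index = {word: i + 1 for i, (word, _) in
              enumerate(sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])))}

# tokenize a sentence into its integer representation
print([word_index[w] for w in "i have books".split()])   # [7, 6, 2]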

NOTE: Oftentimes, depending on your NLP task, it is useful to also perform other pre-processing and cleaning steps, such as:

  • treating each punctuation mark as a token (e.g., , . ! ? are each separate tokens)
  • lower-casing all words (so that a given word isn't treated differently just because it starts a sentence or not)
  • separating each sentence with unique boundary symbols (e.g., <s> and </s>)
  • removing words that are incredibly common (e.g., function words, (in)definite articles); these are referred to as 'stopwords'. For language modelling, we DO NOT remove stopwords, since a sentence's meaning needs to include all of its original words.

Load data

In [10]:
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocabulary_size)
print('Number of reviews', len(X_train))
print('Length of first and fifth review before padding', len(X_train[0]) ,len(X_train[4]))
print('First review', X_train[0])
print('First label', y_train[0])
Number of reviews 25000
Length of first and fifth review before padding 500 500
First review [   0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    1   14   22   16   43  530  973 1622 1385   65  458 4468
   66 3941    4  173   36  256    5   25  100   43  838  112   50  670
    2    9   35  480  284    5  150    4  172  112  167    2  336  385
   39    4  172 4536 1111   17  546   38   13  447    4  192   50   16
    6  147 2025   19   14   22    4 1920 4613  469    4   22   71   87
   12   16   43  530   38   76   15   13 1247    4   22   17  515   17
   12   16  626   18    2    5   62  386   12    8  316    8  106    5
    4 2223 5244   16  480   66 3785   33    4  130   12   16   38  619
    5   25  124   51   36  135   48   25 1415   33    6   22   12  215
   28   77   52    5   14  407   16   82    2    8    4  107  117 5952
   15  256    4    2    7 3766    5  723   36   71   43  530  476   26
  400  317   46    7    4    2 1029   13  104   88    4  381   15  297
   98   32 2071   56   26  141    6  194 7486   18    4  226   22   21
  134  476   26  480    5  144   30 5535   18   51   36   28  224   92
   25  104    4  226   65   16   38 1334   88   12   16  283    5   16
 4472  113  103   32   15   16 5345   19  178   32]
First label 1

Preprocess data

If we were training our RNN one sentence at a time, it would be okay to have sentences of varying lengths. However, as with any neural network, it can sometimes be advantageous to train on inputs in batches. When doing so with RNNs, our input tensors need to be of the same length/dimensions. Thus, let's pad our sentences.

In [8]:
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
print('Length of first and fifth review after padding', len(X_train[0]) ,len(X_train[4]))
Length of first and fifth review after padding 500 500

MODEL 1A : FEED-FORWARD NETWORKS WITHOUT EMBEDDINGS

Let us build a feed-forward net with a single hidden layer of 250 nodes. Each input is a 500-dimensional vector of token IDs, since we padded all our sequences to length 500.


EXERCISE: Calculate the number of parameters involved in this network and implement a feedforward net to do classification, without looking at the cells below.
In [11]:
model = Sequential()

model.add(Dense(250, activation='relu',input_dim=max_review_length))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_3 (Dense)              (None, 250)               125250    
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 251       
=================================================================
Total params: 125,501
Trainable params: 125,501
Non-trainable params: 0
_________________________________________________________________
None
Train on 25000 samples, validate on 25000 samples
Epoch 1/10
 - 1s - loss: 176.8025 - accuracy: 0.5048 - val_loss: 98.0177 - val_accuracy: 0.4998
Epoch 2/10
 - 1s - loss: 49.7650 - accuracy: 0.5830 - val_loss: 52.1167 - val_accuracy: 0.5024
Epoch 3/10
 - 1s - loss: 17.7281 - accuracy: 0.6645 - val_loss: 32.8438 - val_accuracy: 0.5027
Epoch 4/10
 - 1s - loss: 7.7892 - accuracy: 0.7189 - val_loss: 21.9369 - val_accuracy: 0.5012
Epoch 5/10
 - 1s - loss: 3.8072 - accuracy: 0.7672 - val_loss: 16.1270 - val_accuracy: 0.5046
Epoch 6/10
 - 1s - loss: 2.2670 - accuracy: 0.7916 - val_loss: 11.6964 - val_accuracy: 0.5064
Epoch 7/10
 - 1s - loss: 1.4903 - accuracy: 0.8087 - val_loss: 9.6540 - val_accuracy: 0.5030
Epoch 8/10
 - 2s - loss: 1.2193 - accuracy: 0.8132 - val_loss: 9.1117 - val_accuracy: 0.5051
Epoch 9/10
 - 1s - loss: 0.9770 - accuracy: 0.8321 - val_loss: 8.4030 - val_accuracy: 0.5035
Epoch 10/10
 - 1s - loss: 0.7273 - accuracy: 0.8542 - val_loss: 7.8003 - val_accuracy: 0.5035
Accuracy: 50.35%
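As a check on the parameter count from the exercise above: the hidden Dense layer has $500 \times 250 + 250 = 125,250$ parameters (weights plus biases), and the output layer has $250 \times 1 + 1 = 251$, for a total of 125,501, which matches the summary above.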
Discussion: Why was the performance bad? What was wrong with tokenization?

MODEL 1B : FEED-FORWARD NETWORKS WITH EMBEDDINGS

What is an embedding layer ?

An embedding is a "distributed representation" (e.g., vector) of a particular atomic item (e.g., word token, object, etc). When representing items by embeddings:

  • each distinct item should be represented by its own unique embedding
  • the semantic similarity between items should correspond to the similarity between their respective embeddings (i.e., words that are more similar to one another should have embeddings that are more similar to each other).

There are essentially an infinite number of ways to create such embeddings, and since these representations have such a great influence on the performance of our models, an incredible amount of research has been dedicated to this very aspect. If you are interested in learning more, start with the astronomically impactful papers on word2vec and GloVe.

In general, though, one can view the embedding process as a linear projection from one vector space to another (e.g., a vector space of unique words being mapped to a world of fixed-length, dense vectors filled with continuous-valued numbers). For NLP, we usually use embeddings to project the one-hot encodings of words (i.e., a vector that is the length of the entire vocabulary, filled with zeros except for a single 1 at the position corresponding to the particular word) onto a lower-dimensional continuous space (e.g., vectors of size 100) so that the input surface is dense and possibly smooth. Thus, one can view this embedding layer as just a transformation from $\mathbb{R}^{inp}$ to $\mathbb{R}^{emb}$.
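To make the lookup view concrete, here is a tiny sketch (illustrative sizes only, unrelated to the IMDb model) showing that an embedding layer is just a trainable matrix indexed by token ID, which is equivalent to multiplying a one-hot vector by that matrix:

In [ ]:
import numpy as np

vocab, emb_dim = 6, 3                    # tiny illustrative sizes
W = np.random.randn(vocab, emb_dim)      # the matrix an Embedding layer would learn

token_id = 4                             # integer token for some word
one_hot = np.eye(vocab)[token_id]        # its one-hot encoding

# a row lookup and a one-hot matrix product give the same embedding
print(np.allclose(one_hot @ W, W[token_id]))   # True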

In [16]:
embedding_dim = 100
In [14]:
model = Sequential()

# inputs will be converted from batch_size * sentence_length to batch_size * sentence_length * embedding_dim
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (None, 500, 100)          1000000   
_________________________________________________________________
flatten_1 (Flatten)          (None, 50000)             0         
_________________________________________________________________
dense_5 (Dense)              (None, 250)               12500250  
_________________________________________________________________
dense_6 (Dense)              (None, 1)                 251       
=================================================================
Total params: 13,500,501
Trainable params: 13,500,501
Non-trainable params: 0
_________________________________________________________________
None
In [15]:
# fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=2, batch_size=128, verbose=2)

# evaluate the model on the test set
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Train on 25000 samples, validate on 25000 samples
Epoch 1/2
 - 52s - loss: 0.4555 - accuracy: 0.7728 - val_loss: 0.2868 - val_accuracy: 0.8772
Epoch 2/2
 - 50s - loss: 0.1172 - accuracy: 0.9584 - val_loss: 0.3433 - val_accuracy: 0.8594
Accuracy: 85.94%

MODEL 2 : CONVOLUTIONAL NEURAL NETWORKS (CNN)

Text can be thought of as a 1-dimensional sequence (a single, long vector), and we can apply 1D convolutions over a set of word embeddings. Let us walk through convolutions on text data with this blog. If you want to learn more, read this published and well-cited paper from my friend, Byron Wallace.

Fit a 1D convolution with 200 filters, kernel size 3, followed by a feed-forward layer of 250 nodes, and ReLU and Sigmoid activations as appropriate.

In [18]:
# %load sol2.py
# create the CNN
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(Conv1D(filters=200, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

# train the CNN
model.fit(X_train, y_train, epochs=2, batch_size=128)

# evaluate the CNN
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_2 (Embedding)      (None, 500, 100)          1000000   
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 500, 200)          60200     
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 250, 200)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 50000)             0         
_________________________________________________________________
dense_7 (Dense)              (None, 250)               12500250  
_________________________________________________________________
dense_8 (Dense)              (None, 1)                 251       
=================================================================
Total params: 13,560,701
Trainable params: 13,560,701
Non-trainable params: 0
_________________________________________________________________
None
/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Epoch 1/2
15104/25000 [=================>............] - ETA: 38s - loss: 0.7021 - accuracy: 0.5011
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
 in 
     12 
     13 # train the CNN
---> 14 model.fit(X_train, y_train, epochs=2, batch_size=128)
     15 
     16 # evalute the CNN

/usr/local/lib/python3.7/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1237                                         steps_per_epoch=steps_per_epoch,
   1238                                         validation_steps=validation_steps,
-> 1239                                         validation_freq=validation_freq)
   1240 
   1241     def evaluate(self,

/usr/local/lib/python3.7/site-packages/keras/engine/training_arrays.py in fit_loop(model, fit_function, fit_inputs, out_labels, batch_size, epochs, verbose, callbacks, val_function, val_inputs, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq)
    194                     ins_batch[i] = ins_batch[i].toarray()
    195 
--> 196                 outs = fit_function(ins_batch)
    197                 outs = to_list(outs)
    198                 for l, o in zip(out_labels, outs):

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py in __call__(self, inputs)
   3738         value = math_ops.cast(value, tensor.dtype)
   3739       converted_inputs.append(value)
-> 3740     outputs = self._graph_fn(*converted_inputs)
   3741 
   3742     # EagerTensor.numpy() will often make a copy to ensure memory safety.

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
   1079       TypeError: For invalid positional/keyword argument combinations.
   1080     """
-> 1081     return self._call_impl(args, kwargs)
   1082 
   1083   def _call_impl(self, args, kwargs, cancellation_manager=None):

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _call_impl(self, args, kwargs, cancellation_manager)
   1119       raise TypeError("Keyword arguments {} unknown. Expected {}.".format(
   1120           list(kwargs.keys()), list(self._arg_keywords)))
-> 1121     return self._call_flat(args, self.captured_inputs, cancellation_manager)
   1122 
   1123   def _filtered_call(self, args, kwargs):

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1222     if executing_eagerly:
   1223       flat_outputs = forward_function.call(
-> 1224           ctx, args, cancellation_manager=cancellation_manager)
   1225     else:
   1226       gradient_name = self._delayed_rewrite_functions.register()

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in call(self, ctx, args, cancellation_manager)
    509               inputs=args,
    510               attrs=("executor_type", executor_type, "config_proto", config),
--> 511               ctx=ctx)
    512         else:
    513           outputs = execute.execute_with_cancellation(

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     59     tensors = pywrap_tensorflow.TFE_Py_Execute(ctx._handle, device_name,
     60                                                op_name, inputs, attrs,
---> 61                                                num_outputs)
     62   except core._NotOkStatusException as e:
     63     if name is not None:

KeyboardInterrupt: 

MODEL 3 : Simple RNN

Two great blogs that are helpful for understanding the workings of an RNN and an LSTM are

  1. http://karpathy.github.io/2015/05/21/rnn-effectiveness/
  2. http://colah.github.io/posts/2015-08-Understanding-LSTMs/

At a high-level, an RNN is similar to a feed-forward neural network (FFNN) in that there is an input layer, a hidden layer, and an output layer. The input layer is fully connected to the hidden layer, and the hidden layer is fully connected to the output layer. However, the crux of what makes it a recurrent neural network is that the hidden layer for a given time t is not only based on the input layer at time t but also the hidden layer from time t-1.

Mathematically, a SimpleRNN can be defined by the following recurrence relation:

$h_t = \sigma(W([h_{t-1},x_{t}])+b)$

If we extend this recurrence relation to the length of sequences we have in hand, our RNN architecture can be viewed as follows (this is also referred to as 'unrolling' the network):
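As a minimal NumPy sketch of unrolling this recurrence over a toy sequence (illustrative shapes only; note that Keras's SimpleRNN uses tanh by default, while we use $\sigma$ to follow the equation above):

In [ ]:
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden_dim, input_dim, timesteps = 4, 3, 5
W = np.random.randn(hidden_dim, hidden_dim + input_dim) * 0.1
b = np.zeros(hidden_dim)

x = np.random.randn(timesteps, input_dim)   # a toy input sequence
h = np.zeros(hidden_dim)                    # h_0

# unroll h_t = sigma(W [h_{t-1}, x_t] + b) over the time steps
for t in range(timesteps):
    h = sigmoid(W @ np.concatenate([h, x[t]]) + b)

print(h.shape)   # (4,) -- the final hidden state, which a classifier layer would consume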

In [20]:
# %load sol3.py
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(SimpleRNN(100))

model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

model.fit(X_train, y_train, epochs=20, batch_size=16)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_3 (Embedding)      (None, 500, 100)          1000000   
_________________________________________________________________
simple_rnn_1 (SimpleRNN)     (None, 100)               20100     
_________________________________________________________________
dense_9 (Dense)              (None, 1)                 101       
=================================================================
Total params: 1,020,201
Trainable params: 1,020,201
Non-trainable params: 0
_________________________________________________________________
None
/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Epoch 1/3
25000/25000 [==============================] - 75s 3ms/step - loss: 0.6572 - accuracy: 0.5923
Epoch 2/3
25000/25000 [==============================] - 59s 2ms/step - loss: 0.5133 - accuracy: 0.7533
Epoch 3/3
25000/25000 [==============================] - 66s 3ms/step - loss: 0.4463 - accuracy: 0.8013
Accuracy: 71.55%

RNNs and vanishing/exploding gradients

Let us use sigmoid activations as an example. The derivative of a sigmoid can be written as

$\sigma'(x) = \sigma(x)\,(1-\sigma(x))$.


Remember, an RNN is a "really deep" feed-forward-esque network (when unrolled in time), so backpropagation happens from $h_t$ all the way back to $h_1$, and by the chain rule the gradient picks up a factor of $\sigma'$ at every step. Since $\sigma'(x) \leq 0.25$ everywhere and tends to 0 whenever the pre-activation saturates the sigmoid (for example, a strongly negative pre-activation drives $\sigma(x)$, and hence $\sigma'(x)$, toward 0), the product of many such factors shrinks rapidly, effectively "vanishing" the gradient: the earlier time steps that the current step backpropagates into learn almost nothing useful. (Conversely, large recurrent weights can make the same repeated product blow up, giving exploding gradients.)
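A quick toy check of this shrinkage (purely illustrative):

In [ ]:
import numpy as np

def dsigmoid(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1 - s)

# product of sigmoid derivatives over an increasing number of "time steps",
# evaluated at the most favorable point x = 0 where the derivative is 0.25
for steps in [5, 10, 20, 50]:
    print(steps, np.prod(np.full(steps, dsigmoid(0.0))))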

LSTMs and GRU

LSTM and GRU are two sophisticated implementations of RNNs that use gates (one could say that their success hinges on these gates). A gate emits a value between 0 and 1 that controls how much information passes through. For instance, an LSTM is built on the following state updates.

Each gate has its own linear transformation $L_{\bullet}(x) = W_{\bullet}x + b_{\bullet}$, applied to the concatenation $[h_{t-1}, x_t]$:

$f_t = \sigma(L_f([h_{t-1},x_t]))$ (forget gate)

$i_t = \sigma(L_i([h_{t-1},x_t]))$ (input gate)

$o_t = \sigma(L_o([h_{t-1},x_t]))$ (output gate)

$\hat{C}_t = \tanh(L_C([h_{t-1},x_t]))$ (candidate cell state)

$C_t = f_t * C_{t-1} + i_t * \hat{C}_t$ (using the forget and input gates, the network learns how much of the old cell state to retain and how much new information to write)

$h_t = o_t * \tanh(C_t)$
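A minimal NumPy sketch of a single LSTM step implementing the updates above (each gate has its own weights; the shapes are illustrative and this is not how Keras implements it internally):

In [ ]:
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden_dim, input_dim = 4, 3
rng = np.random.default_rng(0)

# one weight matrix per gate, each acting on the concatenation [h_{t-1}, x_t];
# a shared zero bias is used here for brevity
Wf, Wi, Wo, Wc = [rng.normal(size=(hidden_dim, hidden_dim + input_dim)) * 0.1
                  for _ in range(4)]
b = np.zeros(hidden_dim)

def lstm_step(h_prev, C_prev, x_t):
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(Wf @ z + b)           # forget gate
    i = sigmoid(Wi @ z + b)           # input gate
    o = sigmoid(Wo @ z + b)           # output gate
    C_hat = np.tanh(Wc @ z + b)       # candidate cell state
    C = f * C_prev + i * C_hat        # new cell state
    h = o * np.tanh(C)                # new hidden state
    return h, C

h, C = np.zeros(hidden_dim), np.zeros(hidden_dim)
h, C = lstm_step(h, C, rng.normal(size=input_dim))
print(h.shape, C.shape)   # (4,) (4,)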

MODEL 4 : LSTM

Now, let's use an LSTM model to do the classification! To make it a fair comparison to the SimpleRNN, let's start with the same architecture hyper-parameters (e.g., number of hidden nodes, epochs, and batch size). Then, experiment with increasing the number of nodes, stacking multiple layers, applying dropout, etc. Check the number of parameters that this model entails.
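For reference, the LSTM layer's parameter count works out to $4 \times ((100 + 100) \times 100 + 100) = 80,400$: four gate transformations, each with a weight matrix over the concatenated $[h_{t-1}, x_t]$ (a 200-dimensional input mapped to 100 units) plus a bias. Together with the embedding ($10,000 \times 100 = 1,000,000$) and the output layer ($100 + 1 = 101$), this gives 1,080,501 parameters in total.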

In [22]:
# %load sol4.py
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

model.fit(X_train, y_train, epochs=3, batch_size=64)

# evaluation of the LSTM's performance
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_4 (Embedding)      (None, 500, 100)          1000000   
_________________________________________________________________
lstm_1 (LSTM)                (None, 100)               80400     
_________________________________________________________________
dense_10 (Dense)             (None, 1)                 101       
=================================================================
Total params: 1,080,501
Trainable params: 1,080,501
Non-trainable params: 0
_________________________________________________________________
None
/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Epoch 1/3
 2368/25000 [=>............................] - ETA: 3:34 - loss: 0.7128 - accuracy: 0.5752
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
 in 
      7 print(model.summary())
      8 
----> 9 model.fit(X_train, y_train, epochs=3, batch_size=64)
     10 
     11 # evaluation of the LSTM's performance

/usr/local/lib/python3.7/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1237                                         steps_per_epoch=steps_per_epoch,
   1238                                         validation_steps=validation_steps,
-> 1239                                         validation_freq=validation_freq)
   1240 
   1241     def evaluate(self,

/usr/local/lib/python3.7/site-packages/keras/engine/training_arrays.py in fit_loop(model, fit_function, fit_inputs, out_labels, batch_size, epochs, verbose, callbacks, val_function, val_inputs, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq)
    194                     ins_batch[i] = ins_batch[i].toarray()
    195 
--> 196                 outs = fit_function(ins_batch)
    197                 outs = to_list(outs)
    198                 for l, o in zip(out_labels, outs):

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py in __call__(self, inputs)
   3738         value = math_ops.cast(value, tensor.dtype)
   3739       converted_inputs.append(value)
-> 3740     outputs = self._graph_fn(*converted_inputs)
   3741 
   3742     # EagerTensor.numpy() will often make a copy to ensure memory safety.

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
   1079       TypeError: For invalid positional/keyword argument combinations.
   1080     """
-> 1081     return self._call_impl(args, kwargs)
   1082 
   1083   def _call_impl(self, args, kwargs, cancellation_manager=None):

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _call_impl(self, args, kwargs, cancellation_manager)
   1119       raise TypeError("Keyword arguments {} unknown. Expected {}.".format(
   1120           list(kwargs.keys()), list(self._arg_keywords)))
-> 1121     return self._call_flat(args, self.captured_inputs, cancellation_manager)
   1122 
   1123   def _filtered_call(self, args, kwargs):

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1222     if executing_eagerly:
   1223       flat_outputs = forward_function.call(
-> 1224           ctx, args, cancellation_manager=cancellation_manager)
   1225     else:
   1226       gradient_name = self._delayed_rewrite_functions.register()

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in call(self, ctx, args, cancellation_manager)
    509               inputs=args,
    510               attrs=("executor_type", executor_type, "config_proto", config),
--> 511               ctx=ctx)
    512         else:
    513           outputs = execute.execute_with_cancellation(

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     59     tensors = pywrap_tensorflow.TFE_Py_Execute(ctx._handle, device_name,
     60                                                op_name, inputs, attrs,
---> 61                                                num_outputs)
     62   except core._NotOkStatusException as e:
     63     if name is not None:

KeyboardInterrupt: 

MODEL 5 : CNN + LSTM

CNNs are good at learning spatial features, and sentences can be thought of as 1-D spatial vectors (with dimensionality determined by the number of words in the sentence). We apply an LSTM over the features learned by the CNN (after a max-pooling layer). This leverages the power of CNNs and LSTMs combined! We expect the CNN to be able to pick out invariant features across the 1-D spatial structure (i.e., the sentence) that characterize good and bad sentiment. These learned spatial features may then be processed as a sequence by an LSTM layer, and the final classification is made via a feed-forward connection to a single node.

In [23]:
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dim, input_length=max_review_length))
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

model.fit(X_train, y_train, epochs=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
Model: "sequential_12"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_8 (Embedding)      (None, 500, 100)          1000000   
_________________________________________________________________
conv1d_5 (Conv1D)            (None, 500, 32)           9632      
_________________________________________________________________
max_pooling1d_5 (MaxPooling1 (None, 250, 32)           0         
_________________________________________________________________
lstm_1 (LSTM)                (None, 100)               53200     
_________________________________________________________________
dense_23 (Dense)             (None, 1)                 101       
=================================================================
Total params: 1,062,933
Trainable params: 1,062,933
Non-trainable params: 0
_________________________________________________________________
None
/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Epoch 1/3
25000/25000 [==============================] - 90s 4ms/step - loss: 0.3942 - accuracy: 0.8035
Epoch 2/3
25000/25000 [==============================] - 88s 4ms/step - loss: 0.2084 - accuracy: 0.9208
Epoch 3/3
25000/25000 [==============================] - 89s 4ms/step - loss: 0.1347 - accuracy: 0.9522
Accuracy: 87.59%

CONCLUSION

We saw the power of sequence models and how useful they are for text classification. They give solid performance and a low memory footprint (thanks to shared parameters), and they are able to understand and leverage the temporally connected information contained in the inputs. There is still an open debate in the research community about the performance vs. memory trade-offs of CNNs vs. RNNs.

As a side-note and bit of history: how amazing is it that we can construct these neural networks, train them, and evaluate them in just a few lines of code?! Imagine if we didn't have deep learning libraries like Keras and Tensorflow; we'd have to write backpropagation and gradient descent by hand. Our last network could easily require thousands of lines of code and many hours of debugging. This is what many people did just 8 years ago, since deep learning wasn't common and the community hadn't yet written nicely packaged libraries. Many libraries have come and gone, but nowadays most people use either Tensorflow/Keras (by Google) or PyTorch (by Facebook). I expect them to remain as great libraries for the foreseeable future, so if you're interested in deep learning, it's worth the investment to learn one, or both, of them well.

2. 231+432 = 665... It's not? Let's ask our LSTM

In this exercise, we are going to teach addition to our model. Given two numbers (each less than 1000), the model outputs their sum (at most 4 digits). The input is provided as a string, '231+432', and the model provides its output as ' 663' (here the leading space is the padding character). We are not going to use any external dataset; we will construct our own dataset for this exercise.

The exercise we attempt here effectively "translates" a sequence of characters, '231+432', into another sequence of characters, ' 663'; hence, this class of models is called sequence-to-sequence (aka seq2seq) models. Such architectures have profound applications in several real-life tasks such as machine translation, summarization, and image captioning.

To be clear, sequence-to-sequence models take as input a sequence of length N and return a sequence of length M, where N and M may or may not differ, and different inputs may have different lengths and values, too. For example, machine translation concerns converting text from one natural language to another (e.g., translating English to French). Google Translate is an example, and their system is a seq2seq model. The input (e.g., an English sentence) can be of any length, and the output (e.g., a French sentence) may be of any length.

Background knowledge: The earliest and simplest seq2seq models work by having one RNN for the input, just like we've always done; we refer to it as the "encoder." The final hidden state of the encoder RNN is fed as input to another RNN that we refer to as the "decoder." The job of the decoder is to generate the output, one token at a time. This may seem really limiting, as it relies on the encoder encapsulating the entire input sequence in a single hidden state vector. It seems unrealistic that we could encode the entire meaning of a sentence this way, yet results even from this simplistic setup can be quite impressive. In fact, these early results were compelling enough that such models quickly displaced decades of earlier machine translation work.

In [26]:
from __future__ import print_function
from keras.models import Sequential
from keras import layers
from keras.layers import Dense, RepeatVector, TimeDistributed
import numpy as np
from six.moves import range

The less interesting data generation and preprocessing

In [31]:
class CharacterTable(object):
    def __init__(self, chars):        
        self.chars = sorted(set(chars))
        self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
        self.indices_char = dict((i, c) for i, c in enumerate(self.chars))

    # converts a String of characters into a one-hot embedding/vector
    def encode(self, C, num_rows):        
        x = np.zeros((num_rows, len(self.chars)))
        for i, c in enumerate(C):
            x[i, self.char_indices[c]] = 1
        return x
    
    # converts a one-hot embedding/vector into a String of characters
    def decode(self, x, calc_argmax=True):        
        if calc_argmax:
            x = x.argmax(axis=-1)
        return ''.join(self.indices_char[i] for i in x)
In [32]:
TRAINING_SIZE = 50000
DIGITS = 3
MAXOUTPUTLEN = DIGITS + 1
MAXLEN = DIGITS + 1 + DIGITS

chars = '0123456789+ '
ctable = CharacterTable(chars)
In [33]:
def return_random_digit():
  return np.random.choice(list('0123456789'))  

# generate a new random number with up to `DIGITS` digits
def generate_number():
  num_digits = np.random.randint(1, DIGITS + 1)  
  return int(''.join( return_random_digit()
                      for i in range(num_digits)))

# generate `num_examples` pairs of random numbers
def data_generate(num_examples):
  questions = []
  answers = []
  seen = set()
  print('Generating data...')
  while len(questions) < num_examples:
      a, b = generate_number(), generate_number()
        
      # don't allow duplicates; this is good practice for training,
      # as we will minimize memorizing seen examples
      key = tuple(sorted((a, b)))
      if key in seen:
          continue
      seen.add(key)
    
      # pad the data with spaces so that the length is always MAXLEN.
      q = '{}+{}'.format(a, b)
      query = q + ' ' * (MAXLEN - len(q))
      ans = str(a + b)
    
      # answers can be of maximum size DIGITS + 1.
      ans += ' ' * (MAXOUTPUTLEN - len(ans))
      questions.append(query)
      answers.append(ans)
  print('Total addition questions:', len(questions))
  return questions, answers

def encode_examples(questions, answers):
  x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=bool)
  y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=bool)
  for i, sentence in enumerate(questions):
      x[i] = ctable.encode(sentence, MAXLEN)
  for i, sentence in enumerate(answers):
      y[i] = ctable.encode(sentence, DIGITS + 1)

  indices = np.arange(len(y))
  np.random.shuffle(indices)
  return x[indices],y[indices]
In [34]:
q,a = data_generate(TRAINING_SIZE)
x,y = encode_examples(q,a)

# divides our data into training and validation
split_at = len(x) - len(x) // 10
x_train, x_val, y_train, y_val = x[:split_at], x[split_at:],y[:split_at],y[split_at:]

print('Training Data shape:')
print('X : ', x_train.shape)
print('Y : ', y_train.shape)

print('Sample Question(in encoded form) : ', x_train[0], y_train[0])
print('Sample Question(in decoded form) : ', ctable.decode(x_train[0]),'Sample Output : ', ctable.decode(y_train[0]))
Generating data...
Total addition questions: 50000
Training Data shape:
X :  (45000, 7, 12)
Y :  (45000, 4, 12)
Sample Question(in encoded form) :  [[False False False False  True False False False False False False False]
 [False False False  True False False False False False False False False]
 [False False False False False False False  True False False False False]
 [False  True False False False False False False False False False False]
 [False False False False False False False False False False  True False]
 [False False False False False False  True False False False False False]
 [ True False False False False False False False False False False False]] [[False False False False  True False False False False False False False]
 [False False False False False False False False False False False  True]
 [False False False False False False False False False False False  True]
 [ True False False False False False False False False False False False]]
Sample Question(in decoded form) :  215+84  Sample Output :  299 

Let's learn two wrapper layers in Keras -- TimeDistributed and RepeatVector -- with some dummy examples.

TimeDistributed is a wrapper that applies a given layer to every timestep of its input. For instance, if we have a feed-forward layer which converts a 10-dim vector to a 5-dim vector, then wrapping it in TimeDistributed converts an input of shape batch_size * sentence_len * 10 into an output of shape batch_size * sentence_len * 5.

In [35]:
model = Sequential()
# Inputs will be batch_size * time_steps * input_vector_dim (fed to the Dense layer)
# Outputs will be batch_size * time_steps * output_vector_dim
# Here, Dense() converts a 5-dim input vector to an 8-dim vector.
model.add(TimeDistributed(Dense(8), input_shape=(3, 5)))
input_array = np.random.randint(10, size=(1,3,5))
print("Shape of input : ", input_array.shape)

model.compile('rmsprop', 'mse')
output_array = model.predict(input_array)
print("Shape of output : ", output_array.shape)
Shape of input :  (1, 3, 5)
Shape of output :  (1, 3, 8)

RepeatVector repeats its input vector a specified number of times: the shape changes from (batch_size, num_features) to (batch_size, num_repetitions, num_features).

In [ ]:
model = Sequential()
# converts from 1*10 to 1*6
model.add(Dense(6, input_dim=10))
print(model.output_shape)

# converts from 1*6 to 1*3*6
model.add(RepeatVector(3))
print(model.output_shape) 

input_array = np.random.randint(1000, size=(1, 10))
print("Shape of input : ", input_array.shape)

model.compile('rmsprop', 'mse')
output_array = model.predict(input_array)

print("Shape of output : ", output_array.shape)
# note: `None` is the batch dimension
print('Input : ', input_array[0])
print('Output : ', output_array[0])

MODEL ARCHITECTURE

Note: Whenever you initialize an LSTM in Keras, the option return_sequences=False by default. This means that the next layer only gets to see the hidden state from the final time step. On the other hand, if you set return_sequences=True, the LSTM returns its hidden state at every time step, and the next layer must be able to consume inputs of that form.

Think about how this is relevant to this model's architecture and to the TimeDistributed wrapper we just learned about.
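A small sketch (toy shapes, unrelated to the addition model) that makes the difference visible:

In [ ]:
from keras.models import Sequential
from keras import layers

# return_sequences=False (the default): only the final hidden state comes out
m1 = Sequential([layers.LSTM(8, input_shape=(7, 12))])
print(m1.output_shape)   # (None, 8)

# return_sequences=True: one hidden state per time step
m2 = Sequential([layers.LSTM(8, input_shape=(7, 12), return_sequences=True)])
print(m2.output_shape)   # (None, 7, 8)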

Build an encoder and a decoder, each a single layer with 128 nodes, plus an appropriate Dense layer as required by the model.

In [37]:
# Hyperparameters
RNN = layers.LSTM
HIDDEN_SIZE = 128
BATCH_SIZE = 128
LAYERS = 1

print('Build model...')
model = Sequential()

#ENCODING
model.add(RNN(HIDDEN_SIZE, input_shape=(MAXLEN, len(chars))))
model.add(RepeatVector(MAXOUTPUTLEN))

#DECODING
for _ in range(LAYERS):
    # return hidden layer at each time step
    model.add(RNN(HIDDEN_SIZE, return_sequences=True)) 

model.add(TimeDistributed(layers.Dense(len(chars), activation='softmax')))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
Build model...
Model: "sequential_17"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_5 (LSTM)                (None, 128)               72192     
_________________________________________________________________
repeat_vector_2 (RepeatVecto (None, 4, 128)            0         
_________________________________________________________________
lstm_6 (LSTM)                (None, 4, 128)            131584    
_________________________________________________________________
time_distributed_3 (TimeDist (None, 4, 12)             1548      
=================================================================
Total params: 205,324
Trainable params: 205,324
Non-trainable params: 0
_________________________________________________________________

Let's check how well our model trained.

In [38]:
for iteration in range(1, 2):
    print()  
    model.fit(x_train, y_train,
              batch_size=BATCH_SIZE,
              epochs=20,
              validation_data=(x_val, y_val))
    # Select 20 samples from the validation set at random so
    # we can visualize errors.
    print('Finished iteration ', iteration)
    numcorrect = 0
    numtotal = 20
    
    for i in range(numtotal):
        ind = np.random.randint(0, len(x_val))
        rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])]
        preds = model.predict_classes(rowx, verbose=0)
        q = ctable.decode(rowx[0])
        correct = ctable.decode(rowy[0])
        guess = ctable.decode(preds[0], calc_argmax=False)
        print('Question', q, end=' ')
        print('True', correct, end=' ')
        print('Guess', guess, end=' ')
        if guess == correct :
          print('Good job')
          numcorrect += 1
        else:
          print('Fail')
    print('The model scored ', numcorrect*100/numtotal,' % in its test.')
Train on 45000 samples, validate on 5000 samples
Epoch 1/20
45000/45000 [==============================] - 10s 213us/step - loss: 1.8913 - accuracy: 0.3215 - val_loss: 1.8148 - val_accuracy: 0.3392
Epoch 2/20
45000/45000 [==============================] - 9s 194us/step - loss: 1.7547 - accuracy: 0.3544 - val_loss: 1.6947 - val_accuracy: 0.3761
Epoch 3/20
45000/45000 [==============================] - 8s 188us/step - loss: 1.6045 - accuracy: 0.3990 - val_loss: 1.5127 - val_accuracy: 0.4290
Epoch 4/20
45000/45000 [==============================] - 10s 223us/step - loss: 1.4356 - accuracy: 0.4599 - val_loss: 1.3487 - val_accuracy: 0.4931
Epoch 5/20
45000/45000 [==============================] - 11s 239us/step - loss: 1.2919 - accuracy: 0.5167 - val_loss: 1.2355 - val_accuracy: 0.5404
Epoch 6/20
45000/45000 [==============================] - 9s 196us/step - loss: 1.1851 - accuracy: 0.5556 - val_loss: 1.1403 - val_accuracy: 0.5719
Epoch 7/20
45000/45000 [==============================] - 9s 191us/step - loss: 1.0988 - accuracy: 0.5872 - val_loss: 1.0659 - val_accuracy: 0.5950
Epoch 8/20
45000/45000 [==============================] - 9s 194us/step - loss: 1.0235 - accuracy: 0.6134 - val_loss: 0.9967 - val_accuracy: 0.6226
Epoch 9/20
45000/45000 [==============================] - 9s 198us/step - loss: 0.9560 - accuracy: 0.6388 - val_loss: 0.9391 - val_accuracy: 0.6388
Epoch 10/20
45000/45000 [==============================] - 9s 207us/step - loss: 0.8848 - accuracy: 0.6645 - val_loss: 0.8588 - val_accuracy: 0.6694
Epoch 11/20
45000/45000 [==============================] - 10s 217us/step - loss: 0.7927 - accuracy: 0.6988 - val_loss: 0.7477 - val_accuracy: 0.7097
Epoch 12/20
45000/45000 [==============================] - 9s 198us/step - loss: 0.6750 - accuracy: 0.7465 - val_loss: 0.6294 - val_accuracy: 0.7650
Epoch 13/20
45000/45000 [==============================] - 10s 223us/step - loss: 0.5542 - accuracy: 0.8015 - val_loss: 0.5125 - val_accuracy: 0.8166
Epoch 14/20
45000/45000 [==============================] - 10s 217us/step - loss: 0.4452 - accuracy: 0.8534 - val_loss: 0.4086 - val_accuracy: 0.8692
Epoch 15/20
45000/45000 [==============================] - 9s 197us/step - loss: 0.3586 - accuracy: 0.8935 - val_loss: 0.3416 - val_accuracy: 0.8946
Epoch 16/20
45000/45000 [==============================] - 9s 200us/step - loss: 0.2982 - accuracy: 0.9166 - val_loss: 0.2824 - val_accuracy: 0.9196
Epoch 17/20
45000/45000 [==============================] - 9s 199us/step - loss: 0.2379 - accuracy: 0.9403 - val_loss: 0.2302 - val_accuracy: 0.9381
Epoch 18/20
45000/45000 [==============================] - 9s 190us/step - loss: 0.1988 - accuracy: 0.9516 - val_loss: 0.1899 - val_accuracy: 0.9512
Epoch 19/20
45000/45000 [==============================] - 9s 203us/step - loss: 0.1605 - accuracy: 0.9644 - val_loss: 0.1664 - val_accuracy: 0.9562
Epoch 20/20
45000/45000 [==============================] - 9s 195us/step - loss: 0.1354 - accuracy: 0.9705 - val_loss: 0.1462 - val_accuracy: 0.9622
Finished iteration  1
Question 551+581 True 1132 Guess 1132 Good job
Question 332+746 True 1078 Guess 1078 Good job
Question 736+654 True 1390 Guess 1490 Fail
Question 77+705  True 782  Guess 782  Good job
Question 7+266   True 273  Guess 273  Good job
Question 18+7    True 25   Guess 25   Good job
Question 123+666 True 789  Guess 889  Fail
Question 886+48  True 934  Guess 934  Good job
Question 6+862   True 868  Guess 868  Good job
Question 492+378 True 870  Guess 870  Good job
Question 59+30   True 89   Guess 89   Good job
Question 542+95  True 637  Guess 637  Good job
Question 58+738  True 796  Guess 796  Good job
Question 29+62   True 91   Guess 91   Good job
Question 217+21  True 238  Guess 238  Good job
Question 140+811 True 951  Guess 951  Good job
Question 601+814 True 1415 Guess 1425 Fail
Question 815+337 True 1152 Guess 1152 Good job
Question 578+52  True 630  Guess 630  Good job
Question 3+424   True 427  Guess 427  Good job
The model scored  85.0  % in its test.

EXERCISE

  • Try changing the hyperparameters, use other RNN variants and more layers, and check whether increasing the number of epochs is useful.

  • Try reversing the inputs from the validation set and check whether the model has learned the commutative property of addition.

  • Try printing the hidden layer for two commutative inputs and check whether the hidden representations it learned are the same or similar. Do we expect this to be true? If so, why? If not, why not? You can access a layer by index with model.layers, and layer.output gives the output of that layer; see the sketch after this list.
  • Try doing addition in the RNN model the same way we do it by hand: reverse the order of the digits and, at each time step, feed in one digit from each operand (units in the first time step, tens in the second, and so on), produce an output, and carry the hidden state forward to the next pair of digits.
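A minimal sketch for the third bullet, comparing the encoder's hidden representation for two commutative inputs (this reuses the model, ctable, and MAXLEN defined above; the helper names here are our own):

In [ ]:
import numpy as np
from keras.models import Model

# the first layer of the seq2seq model above is the encoder LSTM;
# build a helper model that exposes its output (the final hidden state)
encoder = Model(inputs=model.input, outputs=model.layers[0].output)

def encode_question(q):
    q = q + ' ' * (MAXLEN - len(q))               # pad like the training data
    return ctable.encode(q, MAXLEN)[np.newaxis]   # add a batch dimension

h_ab = encoder.predict(encode_question('231+432'))
h_ba = encoder.predict(encode_question('432+231'))

# if the encoder represented the two orderings identically, this would be ~0
print(np.linalg.norm(h_ab - h_ba))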