Title¶
Recreating an image of Pavlos
Description:¶
The aim of this exercise is to understand image reconstruction using autoencoders.
Instructions:¶
- Load the 3 images given.
- Use the helper code to reshape and flatten the images.
- Create an autoencoder by defining the encode and decode layers.
- Fit on the image of Pavlos.
- Visualise the training loss. This will look similar to the image given above.
- Go through the reconstruction part carefully and understand what is happening in each step. You will not have to fill in any code in this part.
Reconstruction Description:¶
The reconstruction part of this exercise tries to recreate the input image it is given. Keep in mind that our model has learnt from only one image, the one of Pavlos. In the first section of this part, we give Pavlos's image as the input and the model recreates it with very little noise; the result looks similar to the image given below.
We then give the image of an eagle as the input; however, the output is still Pavlos! The noise is shown in the third panel of the image given below.
Finally, we try to get a different output by giving the model an input image of a different human. Pavlos triumphs again.
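In code, each reconstruction below follows the same three steps: flatten the image into a single row, pass it through the trained autoencoder, and reshape the output back into a 100x100 image. A minimal sketch of that pipeline (assuming the autoencoder trained later in this notebook):
# Sketch of the reconstruction pipeline used in the cells below,
# assuming `autoencoder` has already been trained on pavlos_flatten
x = notpavlos_flatten.reshape(1, 100*100)   # one flattened image as a (1, 10000) row
x_hat = autoencoder(x).numpy()              # encode to 8 values, decode back to 10000
recon = x_hat.reshape(100, 100)             # reshape into an image for plotting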
Hints:¶
model.compile() Configures the model for training with a loss and an optimizer.
keras.Sequential() Groups a linear stack of layers into a model.
keras.layers.Dense() A regular densely-connected NN layer.
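To see how these pieces fit together, here is a tiny toy model with made-up layer sizes; it is only a sketch of the API, not the solution to the exercise below.
# Toy illustration of Sequential / Dense / compile (made-up sizes, not the exercise solution)
import tensorflow.keras as keras
from tensorflow.keras.layers import Dense
toy = keras.Sequential()
toy.add(keras.Input(shape=(16,)))        # 16 input features
toy.add(Dense(4, activation='linear'))   # hidden layer with 4 neurons
toy.add(Dense(16, activation='linear'))  # output layer matching the input size
toy.compile(loss='mse', optimizer='adam')
toy.summary()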
# Import the libraries
import tensorflow as tf
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import tensorflow.keras as keras
from tensorflow.keras import layers, Model, Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from PIL import Image
import scipy.ndimage as ndi
# Loading the 3 images
pavlos_img_ptr = np.array(Image.open("pavlos.jpeg"))
notpavlos_img_ptr = np.array(Image.open("not-pavlos.jpeg"))
notpavlos2_img_ptr = np.array(Image.open("not-pavlos2.jpg"))
# Helper function to re-size the images
def img_resize(imgs_in, factor):
    imgs_out_train = ndi.zoom(imgs_in, (1, factor, factor, 1), order=2)
    return imgs_out_train
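For instance, assuming ndi.zoom's behaviour of rounding each scaled dimension, shrinking a dummy 150x150 single-channel image by a factor of 100/150 should give a 100x100 result:
# Hypothetical usage of img_resize: a dummy (1, 150, 150, 1) array
# resized by 100/150 is expected to come out as (1, 100, 100, 1)
dummy = np.zeros((1, 150, 150, 1))
print(img_resize(dummy, 100 / 150).shape)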
# Use the helper code to reduce image size to 100x100
SIZE=100
pavlos_img_ptr = pavlos_img_ptr[:,:,2].reshape(1,150,150,1)
pavlos_img_ptr = img_resize(pavlos_img_ptr, SIZE/pavlos_img_ptr.shape[1])
pavlos_img_nice = pavlos_img_ptr
notpavlos_img_ptr = notpavlos_img_ptr[:,:,2].reshape(1,132,132,1)
notpavlos_img_ptr = img_resize(notpavlos_img_ptr,SIZE/notpavlos_img_ptr.shape[1])
notpavlos_img_nice = notpavlos_img_ptr
notpavlos2_img_ptr = notpavlos2_img_ptr[:,:,2].reshape(1,100,100,1)
notpavlos2_img_ptr = img_resize(notpavlos2_img_ptr, SIZE/notpavlos2_img_ptr.shape[1])
notpavlos2_img_nice = notpavlos2_img_ptr
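A quick optional sanity check (not part of the exercise): after resizing, all three images are expected to share the same (1, 100, 100, 1) shape.
# Optional check: all three resized images should now be (1, 100, 100, 1)
print(pavlos_img_nice.shape, notpavlos_img_nice.shape, notpavlos2_img_nice.shape)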
# Flatten the images
pavlos_flatten = pavlos_img_nice.reshape(100*100,1)
print(pavlos_flatten.shape)
notpavlos_flatten = notpavlos_img_nice.reshape(100*100,1)
print(notpavlos_flatten.shape)
notpavlos2_flatten = notpavlos2_img_nice.reshape(100*100,1)
print(notpavlos2_flatten.shape)
Create model and train¶
### edTest(test_check) ###
# Create an autoencoder with 8 neurons in its dense encoding layer
# using keras' functional API, and fit it on our data
# Get the input size from the shape of the flattened image
input_dim = pavlos_flatten.shape[0]
encoding_dim = 8
# Create an input "layer" using input_dim as a parameter
input_section = ___
# Create a Dense layer with 8 neurons and linear activation as the encoding layer
encoded = ___
# Create an encoder model that takes input_section as input and outputs encoded
encoder = ___
# Decoder
# Create an input "layer" using encoding_dim as shape
latent_input = ___
# Create a Dense layer with input_dim neurons and linear activation as the decoding layer
decoded = ___
# Create a decoder model that takes latent_input as input and outputs decoded
decoder = ___
### edTest(test_architecture) ###
# Create an autoencoder using keras Sequential
autoencoder = ___
# Add the encoder followed by the decoder initialised above to the autoencoder model
autoencoder.___
autoencoder.___
# Compile the model with mse loss and the Adam optimizer with learning rate 0.001
autoencoder.___
# Take a look at the summary of the model
autoencoder.summary()
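As a sanity check on the summary, a Dense layer has (inputs + 1) x units parameters (weights plus biases). Assuming the 10000 -> 8 -> 10000 architecture described in the comments above, the totals should come out as follows.
# Expected parameter counts, assuming the 10000 -> 8 -> 10000 architecture above
encoder_params = (10000 + 1) * 8        # 80,008 weights + biases
decoder_params = (8 + 1) * 10000        # 90,000 weights + biases
print(encoder_params + decoder_params)  # 170,008 trainable parameters in total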
# Fit the model on pavlos_flatten (after reshaping it) and store the returned history
# Specify 100 epochs and a batch size of 1000 with verbose=1
# keras expects a shape of (1, n) for a single flattened input
history = ___
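The reshape mentioned in the comment above turns the (10000, 1) column into a single-row batch; a quick illustration (not the fit call itself):
# keras treats the first axis as the batch axis, so one flattened image
# must be passed as a (1, 10000) row rather than a (10000, 1) column
print(pavlos_flatten.shape)                 # (10000, 1)
print(pavlos_flatten.reshape(1, -1).shape)  # (1, 10000)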
# Helper code to plot the training loss
plt.plot(np.log(history.history['loss']),linewidth=2,color='darkblue' )
plt.title('Epochs vs Training loss')
plt.ylabel('Log loss')
plt.xlabel('Epoch')
plt.legend(['Train'], loc='best')
plt.show()
Reconstruct Pavlos¶
### Reconstruct Pavlos
pavlos_flatten_reconstructed = autoencoder(pavlos_flatten.reshape(-1,input_dim)).numpy()
pavlos_reconstructed = pavlos_flatten_reconstructed.reshape(100,100)
# Helper code to display the images
fig, ax = plt.subplots(1,3, figsize=(9,4))
ax[0].imshow(pavlos_img_nice.reshape(100,100), cmap='gray')
ax[0].set_title('Original')
ax[0].axis('off')
ax[1].imshow(pavlos_reconstructed, cmap='gray')
ax[1].set_title('Recon')
ax[1].axis('off')
ax[2].imshow(pavlos_img_nice.reshape(100,100) - pavlos_reconstructed, cmap='gray')
ax[2].set_title('Original - Recon')
ax[2].axis('off');
Reconstruct Eagle¶
### Reconstruct eagle
notpavlos_flatten_reconstructed = autoencoder(notpavlos_flatten.reshape(-1,input_dim)).numpy()
notpavlos_reconstructed = notpavlos_flatten_reconstructed.reshape(100,100)
# Helper code to display the images
fig, ax = plt.subplots(1,3, figsize=(9,4))
ax[0].imshow(notpavlos_img_nice.reshape(100,100),cmap='gray')
ax[0].set_title('Eagle - original (A)')
ax[0].axis('off')
ax[1].imshow(notpavlos_reconstructed,cmap='gray')
ax[1].set_title('Eagle - Recon (B)')
ax[1].axis('off')
ax[2].imshow(notpavlos_img_nice.reshape(100,100) - notpavlos_reconstructed,cmap='gray')
ax[2].set_title('A - B')
ax[2].axis('off');
Reconstruct Not Pavlos¶
### Reconstruct an image that is not of Pavlos
notpavlos2_flatten_reconstructed = autoencoder(notpavlos2_flatten.reshape(-1,input_dim)).numpy()
notpavlos2_reconstructed = notpavlos2_flatten_reconstructed.reshape(100,100)
# Helper code to display the images
fig, ax = plt.subplots(1,3, figsize=(9,4))
ax[0].imshow(notpavlos2_img_nice.reshape(100,100),cmap='gray')
ax[0].set_title('Marios - Not Pavlos \n original (A)')
ax[0].axis('off')
ax[1].imshow(notpavlos2_reconstructed,cmap='gray')
ax[1].set_title('Marios - Not Pavlos \n Recon (B)')
ax[1].axis('off')
ax[2].imshow(notpavlos2_img_nice.reshape(100,100) - notpavlos2_reconstructed,cmap='gray')
ax[2].set_title('A - B')
ax[2].axis('off');
Mindchow 🍲¶
Go back and decrease the number of epochs to see when the reconstruction starts getting grainy.
Your answer here