CS109A Introduction to Data Science

Lab 2: Scraping

Harvard University
Summer 2018
Instructors: Pavlos Protopapas and Kevin Rader
Lab Instructors: Rahul Dave
Authors: Rahul Dave, David Sondak, Will Claybaugh and Pavlos Protopapas


In [1]:
## RUN THIS CELL TO GET THE RIGHT FORMATTING 
from IPython.core.display import HTML
def css_styling():
    styles = open("../../styles/cs109.css", "r").read()
    return HTML(styles)
css_styling()
In [2]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns  # note: the old seaborn.apionly module has been removed in newer seaborn versions
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
In [3]:
import time, requests

In this lab, we'll scrape Goodreads' Best Books list:

https://www.goodreads.com/list/show/1.Best_Books_Ever?page=1

We'll walk through scraping the list pages for the book names and URLs.

Table of Contents

  Learning Goals
  1. Exploring the web pages and downloading them
  2. Parse the page, extract book URLs
  3. Parse a book page, extract book properties
  4. Set up a pipeline for fetching and parsing

Learning Goals

Understand the structure of a web page and use Beautiful Soup to scrape content from these web pages.

This lab corresponds to lectures 2, 3, and 4 and maps onto homework 1 and beyond.

1. Exploring the web pages and downloading them

We're going to look at the structure of Goodreads' Best Books list. We'll use the Developer tools in Chrome; Safari and Firefox have similar tools available.

To fetch this page we use the requests module. But are we allowed to do this? Let's check:

https://www.goodreads.com/robots.txt

Yes we are.
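We can also check programmatically. Here is a minimal sketch using the standard library's urllib.robotparser (not part of the original lab, just for illustration):

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.goodreads.com/robots.txt")
rp.read()  # fetch and parse robots.txt
# can_fetch: is a generic user agent ("*") allowed to fetch this URL?
print(rp.can_fetch("*", "https://www.goodreads.com/list/show/1.Best_Books_Ever?page=1"))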

In [4]:
URLSTART="https://www.goodreads.com"
BESTBOOKS="/list/show/1.Best_Books_Ever?page="
url = URLSTART+BESTBOOKS+'1'
print(url)
page = requests.get(url)

We can see properties of the page. The most relevant are status_code and text. The former tells us whether the web page was found, and, if found, whether the request was OK. (See lecture notes.)
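If we wanted to fail fast on a bad fetch rather than inspect the code by hand, requests can do the check for us; a small sketch:

# Sanity-check the fetch before using page.text
if page.status_code != 200:
    print("fetch failed with status", page.status_code)
page.raise_for_status()  # or: raise an HTTPError for any 4xx/5xx response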

In [5]:
page.status_code # 200 is good
In [6]:
page.text[:5000]

Let us write a loop to fetch 2 pages of "best books" from Goodreads. Notice the use of a format string. This is an example of old-style Python format strings.
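For example, %02d zero-pads an integer to a width of two characters:

# Old-style (printf-style) formatting: %02d zero-pads to width 2
print('%02d' % 1)   # prints 01
print('%02d' % 12)  # prints 12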

In [7]:
URLSTART="https://www.goodreads.com"
BESTBOOKS="/list/show/1.Best_Books_Ever?page="
for i in range(1,3):
    bookpage=str(i)
    stuff=requests.get(URLSTART+BESTBOOKS+bookpage)
    filetowrite="files/page"+ '%02d' % i + ".html"
    print("FTW", filetowrite)
    fd=open(filetowrite,"w")
    fd.write(stuff.text)
    fd.close()
    time.sleep(2)

2. Parse the page, extract book URLs

Notice how we do file input-output and use Beautiful Soup in the code below. The with construct ensures that the file being read is closed; for the file being written we do this explicitly. We look for the elements with class bookTitle, extract the URLs, and write them into a file.
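To see what the .bookTitle selector does, here is a toy example on a made-up snippet of HTML:

# A made-up snippet, just to illustrate selecting by CSS class
from bs4 import BeautifulSoup
snippet = '<a class="bookTitle" href="/book/show/1"><span>A Title</span></a>'
toy = BeautifulSoup(snippet, 'html.parser')
print(toy.select('.bookTitle')[0]['href'])  # prints /book/show/1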

In [8]:
from bs4 import BeautifulSoup
In [9]:
bookdict={}
for i in range(1,3):
    books=[]
    stri = '%02d' % i
    filetoread="files/page"+ stri + '.html'
    print("FTW", filetoread)
    with open(filetoread) as fdr:
        data = fdr.read()
    soup = BeautifulSoup(data, 'html.parser')
    for e in soup.select('.bookTitle'):
        books.append(e['href'])
    print(books[:10])
    bookdict[stri]=books
    fd=open("files/list"+stri+".txt","w")
    fd.write("\n".join(books))
    fd.close()

Here is George Orwell's 1984

In [10]:
bookdict['02'][0]

Let's go look at the first URLs on both pages.

3. Parse a book page, extract book properties

OK, so now let's dive in, get one of these files, and parse it.

In [11]:
furl=URLSTART+bookdict['02'][0]
furl

In [12]:
fstuff=requests.get(furl)
print(fstuff.status_code)
In [13]:
d=BeautifulSoup(fstuff.text, 'html.parser')
In [14]:
d.select("meta[property='og:title']")[0]['content']

Let's get everything we want...

In [15]:
d=BeautifulSoup(fstuff.text, 'html.parser')
print(
"title", d.select_one("meta[property='og:title']")['content'],"\n",
"isbn", d.select("meta[property='books:isbn']")[0]['content'],"\n",
"type", d.select("meta[property='og:type']")[0]['content'],"\n",
"author", d.select("meta[property='books:author']")[0]['content'],"\n",
"average rating", d.select_one("span.average").text,"\n",
"ratingCount", d.select("meta[itemprop='ratingCount']")[0]["content"],"\n",
"reviewCount", d.select_one("span.count")["title"]
)
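Note the two lookup styles above: select returns a list of every match (hence the [0] indexing), while select_one returns just the first match, or None if nothing matches:

# select -> list of all matches; select_one -> first match or None
print(len(d.select("meta[property='og:title']")))         # number of matching tags
print(d.select_one("meta[property='og:title']") is None)  # False: a match exists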

OK, now that we know what to do, let's wrap our fetching into a proper script. So that we don't overwhelm their servers, we will only fetch 5 books from each page, but you get the idea...

We'll segue off for a bit to explore new-style format strings. See https://pyformat.info for more info.

In [16]:
"list{:0>2}.txt".format(3)
In [17]:
a = "4"
b = 4
class Four:
    def __str__(self):
        return "Fourteen"
c=Four()
In [18]:
"The hazy cat jumped over the {} and {} and {}".format(a, b, c)

4. Set up a pipeline for fetching and parsing

OK, let's get back to the fetching...

In [20]:
fetched=[]
for i in range(1,3):
    with open("files/list{:0>2}.txt".format(i)) as fd:
        counter=0
        for bookurl_line in fd:
            if counter > 4:
                break
            bookurl=bookurl_line.strip()
            stuff=requests.get(URLSTART+bookurl)
            filetowrite=bookurl.split('/')[-1]
            filetowrite="files/"+str(i)+"_"+filetowrite+".html"
            print("FTW", filetowrite)
            fd=open(filetowrite,"w", encoding='utf-8')
            fd.write(stuff.text)
            fd.close()
            fetched.append(filetowrite)
            time.sleep(2)
            counter=counter+1
            
print(fetched)

OK, we are off to parse each one of the HTML pages we fetched. We have provided the skeleton of the code, plus the code to parse the year, since that is a bit more complex: the year can appear in two different places in the page's markup, as the code below shows.

In [21]:
import re
yearre = r'\d{4}'  # a run of exactly 4 digits, e.g. a publication year
def get_year(d):
    # Most pages put the first-publication year inside a <nobr class="greyText">;
    # take the last whitespace-separated token and drop its trailing character
    # (the closing parenthesis)
    if d.select_one("nobr.greyText"):
        return d.select_one("nobr.greyText").text.strip().split()[-1][:-1]
    else:
        # Otherwise fall back to the publication-details row and regex out a year
        thetext=d.select("div#details div.row")[1].text.strip()
        rowmatch=re.findall(yearre, thetext)
        if len(rowmatch) > 0:
            rowtext=rowmatch[0].strip()
        else:
            rowtext="NA"
        return rowtext
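To see the fallback regex in action, here is a made-up example of the kind of details text it scans:

# \d{4} grabs the first 4-digit run, e.g. a year, from free-form text
print(re.findall(yearre, "Published May 1st 2003 by Scholastic Inc."))  # ['2003']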
Exercise

Your job is to fill in the code to get the genres.

In [22]:
def get_genres(d):
    # your code here
    return []  # placeholder so the later cells run; replace with your solution
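If you get stuck, here is a minimal sketch of one possible solution. It assumes the genre links on a book page carry the CSS class bookPageGenreLink, which is what Goodreads used around the time this lab was written; inspect the page and adjust the selector if the markup differs.

# A minimal sketch, assuming genre links use the class "bookPageGenreLink"
def get_genres(d):
    genres = []
    for e in d.select("a.bookPageGenreLink"):
        genres.append(e.text.strip())
    return genres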
In [23]:
listofdicts=[]
for filetoread in fetched:
    print(filetoread)
    td={}
    with open(filetoread, encoding='utf-8') as fd:
        datext = fd.read()
    d=BeautifulSoup(datext, 'html.parser')
    td['title']=d.select_one("meta[property='og:title']")['content']
    td['isbn']=d.select_one("meta[property='books:isbn']")['content']
    td['booktype']=d.select_one("meta[property='og:type']")['content']
    td['author']=d.select_one("meta[property='books:author']")['content']
    td['rating']=d.select_one("span.average").text
    td['ratingCount']=d.select_one("meta[itemprop='ratingCount']")["content"]
    td['reviewCount']=d.select_one("span.count")["title"]
    td['year'] = get_year(d)
    td['file']=filetoread
    glist = get_genres(d)
    td['genres']="|".join(glist)
    listofdicts.append(td)
In [24]:
listofdicts[0]

Finally, let's write all this stuff into a CSV file, which we will use to do analysis.

In [25]:
df = pd.DataFrame.from_records(listofdicts)
df.head()
In [26]:
df.to_csv("files/meta.csv", index=False, header=True)