{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Title:\n",
"\n",
"Exercise: Bagging vs Random Forest (Tree correlation)\n",
"\n",
"## Description:\n",
"\n",
"How does Random Forest improve on Bagging? The goal of this exercise is to investigate the correlation between randomly selected trees from Bagging and Random Forest.\n",
"\n",
"## Instructions:\n",
"\n",
"- Read the dataset `diabetes.csv` as a pandas dataframe, and take a quick look at the data.\n",
"- Split the data into train and validation sets.\n",
"- Define a `BaggingClassifier` model that uses `DecisionTreeClassifier` as its base estimator.\n",
"- Specify the number of bootstraps as 1000 and a maximum depth of 3.\n",
"- Fit the `BaggingClassifier` model on the train data.\n",
"- Use the helper code to predict using the mean model and individual estimators. The plot will look similar to the one given below.\n",
"- Predict on the test data using the first estimator and the mean model.\n",
"- Compute and display the validation accuracy.\n",
"- Repeat the modeling and classification process above, this time using a `RandomForestClassifier`.\n",
"- Your final output will look something like this:\n",
"\n",
"## Hints:\n",
"\n",
"`sklearn.model_selection.train_test_split()`\n",
"Split arrays or matrices into random train and test subsets.\n",
"\n",
"`sklearn.ensemble.BaggingClassifier()`\n",
"Returns a Bagging classifier instance.\n",
"\n",
"`sklearn.tree.DecisionTreeClassifier()`\n",
"A decision tree classifier, which can be used as the base estimator for the Bagging classifier.\n",
"\n",
"`sklearn.ensemble.RandomForestClassifier()`\n",
"Defines a Random Forest classifier.\n",
"\n",
"`sklearn.metrics.accuracy_score(y_true, y_pred)`\n",
"Accuracy classification score."
]
},
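{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Aside:** Bagging (bootstrap aggregating) trains each estimator on a bootstrap sample, i.e. a sample of the training rows drawn *with replacement*. Below is a minimal standard-library sketch of that resampling step (illustrative only; it is not part of the graded exercise):\n",
"\n",
"```python\n",
"import random\n",
"\n",
"random.seed(0)\n",
"data = list(range(10))  # stand-in for 10 training rows\n",
"# One bootstrap sample: same size as the data, drawn with replacement\n",
"boot = [random.choice(data) for _ in data]\n",
"print(len(boot))  # 10: some rows appear more than once, others not at all\n",
"```\n",
"\n",
"Averaging many trees fit on different bootstrap samples reduces variance, but the trees remain correlated, which is exactly what this exercise examines."
]
},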
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"#!pip install -qq dtreeviz\n",
"import os, sys\n",
"sys.path.append(f\"{os.getcwd()}/../\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Import the main packages\n",
"\n",
"import numpy as np\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"from dtreeviz.trees import dtreeviz\n",
"from sklearn.metrics import accuracy_score\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.ensemble import RandomForestClassifier, BaggingClassifier\n",
"\n",
"\n",
"%matplotlib inline\n",
"\n",
"colors = [None, # 0 classes\n",
" None, # 1 class\n",
" ['#FFF4E5','#D2E3EF'],# 2 classes\n",
" ]\n",
"\n",
"from IPython.display import Markdown, display\n",
"def printmd(string):\n",
" display(Markdown(string))\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Read the dataset and take a quick look\n",
"df = pd.read_csv(\"diabetes.csv\")\n",
"df.head()\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"### edTest(test_assign) ###\n",
"# Assign the predictor and response variables. \n",
"# \"Outcome\" is the response and all the other columns are the predictors\n",
"X = __ \n",
"y = __ \n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# Fix a random_state\n",
"random_state = 144\n",
"\n",
"# Split the data into train and validation sets with 80% train size\n",
"# and the above set random state\n",
"X_train, X_val, y_train, y_val = train_test_split(___)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Bagging Implementation"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# Define a Bagging classifier with random_state as above\n",
"# and a DecisionTreeClassifier as the base estimator\n",
"# Fix max_depth to 20 for all trees\n",
"max_depth = ___\n",
"\n",
"# Set the number of estimators to 100\n",
"n_estimators = ___\n",
"\n",
"# Initialize the Decision Tree classifier with the max depth and\n",
"# random state set above\n",
"basemodel = ___\n",
"\n",
"# Initialize a Bagging classifier with the Decision Tree as the base\n",
"# estimator and the number of estimators defined above\n",
"bagging = ___\n",
"\n",
"# Fit the Bagging model on the training set\n",
"___\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"### edTest(test_bagging) ###\n",
"\n",
"# Use the trained model to predict on the validation data\n",
"predictions = ___\n",
"\n",
"# Compute the accuracy on the validation set\n",
"acc_bag = round(accuracy_score(y_val, predictions), 2)\n",
"\n",
"# Print the validation data accuracy\n",
"print(f'For Bagging, the accuracy on the validation set is {acc_bag}')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Random Forest implementation"
]
},
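{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Aside:** Random Forest adds a second source of randomness on top of bagging: at each split, only a random subset of the features is considered (the `max_features` parameter of `RandomForestClassifier`). This de-correlates the trees. Below is a minimal standard-library sketch of that feature-subsampling step (illustrative only; the 8 predictors mirror the diabetes data, and the square-root subset size is the common classification default):\n",
"\n",
"```python\n",
"import math\n",
"import random\n",
"\n",
"random.seed(0)\n",
"n_features = 8  # the diabetes data has 8 predictors\n",
"k = int(math.sqrt(n_features))  # typical max_features for classification\n",
"# Features considered at one split; a fresh subset is drawn at every split\n",
"candidates = random.sample(range(n_features), k)\n",
"print(len(candidates))  # 2\n",
"```\n",
"\n",
"Bagged trees, by contrast, may use any of the 8 features at every split, so strong predictors dominate and the trees end up highly correlated."
]
},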
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"# Define a Random Forest classifier with random_state as defined above\n",
"# and set the maximum depth to be max_depth and use 100 estimators\n",
"random_forest = ___\n",
"\n",
"# Fit the model on the training set\n",
"___\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"### edTest(test_RF) ###\n",
"\n",
"# Use the trained Random Forest model to predict on the validation data\n",
"predictions = ___\n",
"\n",
"# Compute the accuracy on the validation set\n",
"acc_rf = round(accuracy_score(y_val, predictions), 2)\n",
"\n",
"# Print the validation data accuracy\n",
"print(f'For Random Forest, the accuracy on the validation set is {acc_rf}')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualizing the trees - Bagging"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"# Helper code to visualize the Bagging tree\n",
"# Reduce the max_depth for better visualization \n",
"max_depth = 3\n",
"\n",
"basemodel = DecisionTreeClassifier(max_depth=max_depth, \n",
" random_state=random_state)\n",
"\n",
"bagging = BaggingClassifier(base_estimator=basemodel,\n",
"                            n_estimators=1000,\n",
"                            random_state=random_state)\n",
"\n",
"# Fit the model on the training set\n",
"bagging.fit(X_train, y_train)\n",
"\n",
"# Select two of the fitted trees\n",
"bagvati1 = bagging.estimators_[0]\n",
"bagvati2 = bagging.estimators_[100]\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"vizA = dtreeviz(bagvati1, df.iloc[:, :8], df.Outcome,\n",
"                feature_names=df.columns[:8],\n",
"                target_name='Diabetes', class_names=['No', 'Yes'],\n",
"                orientation='TD',\n",
"                colors={'classes': colors},\n",
"                label_fontsize=14,\n",
"                ticks_fontsize=10)\n",
"printmd('