{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# CS109B Data Science 2: Advanced Topics in Data Science \n", "\n", "## Lab 4 - Bayesian Analysis\n", "\n", "**Harvard University**
\n", "**Spring 2020**
\n", "**Instructors:** Mark Glickman, Pavlos Protopapas, and Chris Tanner
\n", "**Lab Instructors:** Chris Tanner and Eleni Angelaki Kaxiras
\n", "**Content:** Eleni Angelaki Kaxiras\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES\n", "import requests\n", "from IPython.core.display import HTML\n", "styles = requests.get(\"https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css\").text\n", "HTML(styles)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import pymc3 as pm\n", "from pymc3 import summary" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import scipy.stats as stats\n", "import pandas as pd\n", "%matplotlib inline \n", "\n", "import warnings\n", "warnings.filterwarnings('ignore')" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running on PyMC3 v3.8\n" ] } ], "source": [ "print('Running on PyMC3 v{}'.format(pm.__version__))" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "application/javascript": [ "IPython.OutputArea.auto_scroll_threshold = 20000;\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "%%javascript\n", "IPython.OutputArea.auto_scroll_threshold = 20000;" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "## Learning Objectives\n", "\n", "By the end of this lab, you should be able to:\n", "* Understand how probability distributions work.\n", "* Apply Bayes Rule in calculating probabilities.\n", "* Understand how to apply Bayesian analysis using PyMC3\n", "* Avoid getting fired when talking to your Bayesian employer.\n", "\n", "**This lab corresponds to Lectures 6, 7, and 8, and maps to Homework 3.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of Contents\n", "\n", "1. The Bayesian Way of Thinking or Is this a Fair Coin?\n", "2. [Intro to `pyMC3`](#pymc3). \n", "3. [Bayesian Linear Regression](#blr).\n", "4. [Try this at Home: Example on Mining Disasters](#no4)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. The Bayesian way of Thinking\n", "\n", "```\n", "Here is my state of knowledge about the situation. Here is some data, I am now going to revise my state of knowledge.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
Table Exercise: Discuss the statement above with your table mates and make sure everyone understands what it means and what constitutes the Bayesian way of thinking. Finally, count the Bayesians among you.</b></div>
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### A. Bayes Rule\n", "\n", "\\begin{equation}\n", "\\label{eq:bayes} \n", "P(A|\\textbf{B}) = \\frac{P(\\textbf{B} |A) P(A) }{P(\\textbf{B})} \n", "\\end{equation}\n", "\n", "$P(A|\\textbf{B})$ is the **posterior** distribution, prob(hypothesis | data) \n", "\n", "$P(\\textbf{B} |A)$ is the **likelihood** function, how probable is my data **B** for different values of the parameters\n", "\n", "$P(A)$ is the marginal probability to observe the data, called the **prior**, this captures our belief about the data before observing it.\n", "\n", "$P(\\textbf{B})$ is the marginal distribution (sometimes called marginal likelihood)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "
Table Exercise: Solve the Monty Hall Paradox using Bayes Rule.
\n", "\n", "![montyhall](../images/montyhall.jpeg)\n", "\n", "You are invited to play a game. There are 3 doors behind **one** of which are the keys to a brand new red Tesla. There is a goat behind each of the other two. \n", "\n", "You are asked to pick one door, and let's say you pick **Door1**. The host knows where the keys are. Of the two remaining closed doors, he will always open the door that has a goat behind it. He'll say \"I will do you a favor and open **Door2**\". So he opens Door2 inside which there is, of course, a goat. He now asks you, do you want to open the initial Door you chose or change to **Door3**? Generally, in this game, when you are presented with this choice should you swap the doors?\n", "\n", "**Initial Steps:**\n", "- Start by defining the `events` of this probabilities game. One definition is:\n", " \n", " - $A_i$: car is behind door $i$ \n", " \n", " - $B_i$ host opens door $i$\n", " \n", "$i\\in[1,2,3]$\n", " \n", "- In more math terms, the question is: is the probability that the price is behind **Door 1** higher than the probability that the price is behind **Door2**, given that an event **has occured**?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### B. Bayes Rule written with Probability Distributions\n", "\n", "We have data that we believe come from an underlying distribution of unknown parameters. If we find those parameters, we know everything about the process that generated this data and we can make inferences (create new data).\n", "\n", "\\begin{equation}\n", "\\label{eq:bayes} \n", "P(\\theta|\\textbf{D}) = \\frac{P(\\textbf{D} |\\theta) P(\\theta) }{P(\\textbf{D})} \n", "\\end{equation}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### But what is $\\theta \\;$?\n", "\n", "$\\theta$ is an unknown yet fixed set of parameters. In Bayesian inference we express our belief about what $\\theta$ might be and instead of trying to guess $\\theta$ exactly, we look for its **probability distribution**. What that means is that we are looking for the **parameters** of that distribution. For example, for a Poisson distribution our $\\theta$ is only $\\lambda$. In a normal distribution, our $\\theta$ is often just $\\mu$ and $\\sigma$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### C. A review of Common Probability Distributions\n", "\n", "#### Discrete Distributions\n", "\n", "The random variable has a **probability mass function (pmf)** which measures the probability that our random variable will take a specific value $y$, denoted $P(Y=y)$.\n", "\n", "- **Bernoulli** (binary outcome, success has probability $\\theta$, $one$ trial):\n", "$\n", "P(Y=k) = \\theta^k(1-\\theta)^{1-k}\n", "$\n", "
\n", "- **Binomial** (binary outcome, success has probability $\\theta$, $n$ trials):\n", "\\begin{equation}\n", "P(Y=k) = {{n}\\choose{k}} \\cdot \\theta^k(1-\\theta)^{n-k}\n", "\\end{equation}\n", "\n", "*Note*: Binomial(1,$p$) = Bernouli($p$)\n", "
\n", "- **Negative Binomial**\n", "
\n", "- **Poisson** (counts independent events occurring at a rate)\n", "\\begin{equation}\n", "P\\left( Y=y|\\lambda \\right) = \\frac{{e^{ - \\lambda } \\lambda ^y }}{{y!}}\n", "\\end{equation}\n", "y = 0,1,2,...\n", "
\n", "- **Discrete Uniform** \n", "
\n", "- **Categorical, or Multinulli** (random variables can take any of K possible categories, each having its own probability; this is a generalization of the Bernoulli distribution for a discrete variable with more than two possible outcomes, such as the roll of a die)\n", "
\n", "- **Dirichlet-multinomial** (a generalization of the beta distribution for many variables)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Continuous Distributions\n", "\n", "The random variable has a **probability density function (pdf)**.\n", "- **Uniform** (variable equally likely to be near each value in interval $(a,b)$)\n", "\\begin{equation}\n", "P(X = x) = \\frac{1}{b - a}\n", "\\end{equation}\n", "anywhere within the interval $(a, b)$, and zero elsewhere.\n", "
\n", "- **Normal** (a.k.a. Gaussian)\n", "\\begin{equation}\n", "X \\sim \\mathcal{N}(\\mu,\\,\\sigma^{2})\n", "\\end{equation} \n", "\n", " A Normal distribution can be parameterized either in terms of precision $\\tau$ or standard deviation ($\\sigma^{2}$. The link between the two is given by\n", "\\begin{equation}\n", "\\tau = \\frac{1}{\\sigma^{2}}\n", "\\end{equation}\n", " - Mean $\\mu$\n", " - Variance $\\frac{1}{\\tau}$ or $\\sigma^{2}$\n", " - Parameters: `mu: float`, `sigma: float` or `tau: float`\n", "
\n", "- **Beta** (variable ($\\theta$) taking on values in the interval $[0,1]$, and parametrized by two positive parameters, $\\alpha$ and $\\beta$ that control the shape of the distribution. \n", " \n", "*Note:*Beta is a good distribution to use for priors (beliefs) because its range is $[0,1]$ which is the natural range for a probability and because we can model a wide range of functions by changing the $\\alpha$ and $\\beta$ parameters.\n", "\n", "\\begin{equation}\n", "\\label{eq:beta} \n", "P(\\theta) = \\frac{1}{B(\\alpha, \\beta)} {\\theta}^{\\alpha - 1} (1 - \\theta)^{\\beta - 1} \\propto {\\theta}^{\\alpha - 1} (1 - \\theta)^{\\beta - 1}\n", "\\end{equation}\n", "\n", "\n", "where the normalisation constant, $B$, is a beta function of $\\alpha$ and $\\beta$,\n", "\n", "\n", "\\begin{equation}\n", "B(\\alpha, \\beta) = \\int_{t=0}^1 t^{\\alpha - 1} (1 - t)^{\\beta - 1} dt.\n", "\\end{equation}\n", "
\n", "- **Exponential**\n", "
\n", "- **Gamma**\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " #### Code Resources:\n", " - Statistical Distributions in numpy/scipy: [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html)\n", " - Statistical Distributions in pyMC3: [distributions in PyMC3](https://docs.pymc.io/api/distributions.html) (we will see those below)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
Exercise: Plot a Discrete variable
\n", "\n", "Change the value of $\\mu$ in the Poisson PMF and see how the plot changes. Remember that the y-axis in a discrete probability distribution shows the probability of the random variable having a specific value in the x-axis.\n", "\n", "\\begin{equation}\n", "P\\left( X=k \\right) = \\frac{{e^{ - \\mu } \\mu ^k }}{{k!}}\n", "\\end{equation}\n", "\n", "**stats.poisson.pmf(x, mu)** $\\mu$(mu) is our $\\theta$ in this case." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.style.use('seaborn-darkgrid')\n", "x = np.arange(0, 30)\n", "for m in [0.5, 3, 8]:\n", " pmf = stats.poisson.pmf(x, m)\n", " plt.plot(x, pmf, 'o', alpha=0.5, label='$\\mu$ = {}'.format(m))\n", "plt.xlabel('random variable', fontsize=12)\n", "plt.ylabel('probability', fontsize=12)\n", "plt.legend(loc=1)\n", "plt.ylim=(-0.1)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# same for binomial\n", "plt.style.use('seaborn-darkgrid')\n", "x = np.arange(0, 22)\n", "ns = [10, 17]\n", "ps = [0.5, 0.7]\n", "for n, p in zip(ns, ps):\n", " pmf = stats.binom.pmf(x, n, p)\n", " plt.plot(x, pmf, 'o', alpha=0.5, label='n = {}, p = {}'.format(n, p))\n", "plt.xlabel('x', fontsize=14)\n", "plt.ylabel('f(x)', fontsize=14)\n", "plt.legend(loc=1)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# discrete uniform\n", "plt.style.use('seaborn-darkgrid')\n", "ls = [0]\n", "us = [3] # watch out, this number can only be integer!\n", "for l, u in zip(ls, us):\n", " x = np.arange(l, u+1)\n", " pmf = [1.0 / (u - l + 1)] * len(x)\n", " plt.plot(x, pmf, '-o', label='lower = {}, upper = {}'.format(l, u))\n", "plt.xlabel('x', fontsize=12)\n", "plt.ylabel('probability P(x)', fontsize=12)\n", "plt.legend(loc=1)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
Exercise: Plot a continuous variable
\n", "\n", "Change the value of $\\mu$ in the Uniform PDF and see how the plot changes.\n", " \n", "Remember that the y-axis in a continuous probability distribution does not shows the actual probability of the random variable having a specific value in the x-axis because that probability is zero!. Instead, to see the probability that the variable is within a small margin we look at the integral below the curve of the PDF.\n", "\n", "The uniform is often used as a noninformative prior." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "Uniform - numpy.random.uniform(a=0.0, b=1.0, size)\n", "```\n", "\n", "$\\alpha$ and $\\beta$ are our parameters. `size` is how many tries to perform.\n", "Our $\\theta$ is basically the combination of the parameters a,b. We can also call it \n", "\\begin{equation}\n", "\\mu = (a+b)/2\n", "\\end{equation}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import uniform\n", "\n", "r = uniform.rvs(size=1000)\n", "plt.plot(r, uniform.pdf(r),'r-', lw=5, alpha=0.6, label='uniform pdf')\n", "plt.hist(r, density=True, histtype='stepfilled', alpha=0.2)\n", "plt.ylabel(r'probability density')\n", "plt.xlabel(f'random variable')\n", "plt.legend(loc='best', frameon=False)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import beta\n", "\n", "alphas = [0.5, 1.5, 3.0]\n", "betas = [0.5, 1.5, 3.0]\n", "x = np.linspace(0, 1, 1000) \n", "colors = ['red', 'green', 'blue']\n", "\n", "fig, ax = plt.subplots(figsize=(8, 5))\n", "\n", "for a, b, colors in zip(alphas, betas, colors):\n", " dist = beta(a, b)\n", " plt.plot(x, dist.pdf(x), c=colors,\n", " label=f'a={a}, b={b}')\n", "\n", "ax.set_ylim(0, 3)\n", "\n", "ax.set_xlabel(r'$\\theta$')\n", "ax.set_ylabel(r'$p(\\theta|\\alpha,\\beta)$')\n", "ax.set_title('Beta Distribution')\n", "\n", "ax.legend(loc='best')\n", "fig.show();" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.style.use('seaborn-darkgrid')\n", "x = np.linspace(-5, 5, 1000)\n", "mus = [0., 0., 0., -2.]\n", "sigmas = [0.4, 1., 2., 0.4]\n", "for mu, sigma in zip(mus, sigmas):\n", " pdf = stats.norm.pdf(x, mu, sigma)\n", " plt.plot(x, pdf, label=r'$\\mu$ = '+ f'{mu},' + r'$\\sigma$ = ' + f'{sigma}') \n", "plt.xlabel('random variable', fontsize=12)\n", "plt.ylabel('probability density', fontsize=12)\n", "plt.legend(loc=1)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.style.use('seaborn-darkgrid')\n", "x = np.linspace(-5, 5, 1000)\n", "mus = [0., 0., 0., -2.] # mean\n", "sigmas = [0.4, 1., 2., 0.4] # std\n", "for mu, sigma in zip(mus, sigmas):\n", " plt.plot(x, uniform.pdf(x, mu, sigma), lw=5, alpha=0.4, \\\n", " label=r'$\\mu$ = '+ f'{mu},' + r'$\\sigma$ = ' + f'{sigma}')\n", "plt.xlabel('random variable', fontsize=12)\n", "plt.ylabel('probability density', fontsize=12)\n", "plt.legend(loc=1)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### D. Is this a Fair Coin?\n", "\n", "We do not want to promote gambling but let's say you visit the casino in **Monte Carlo**. You want to test your theory that casinos are dubious places where coins have been manipulated to have a larger probability for tails. So you will try to estimate how fair a coin is based on 100 flips.
\n", "You begin by flipping the coin. You get either Heads ($H$) or Tails ($T$) as our observed data and want to see if your posterior probabilities change as you obtain more data, that is, more coin flips. A nice way to visualize this is to plot the posterior probabilities as we observe more flips (data). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will be using Bayes rule. $\\textbf{D}$ is our data.\n", "\n", "\\begin{equation}\n", "\\label{eq:bayes} \n", "P(\\theta|\\textbf{D}) = \\frac{P(\\textbf{D} |\\theta) P(\\theta) }{P(\\textbf{D})} \n", "\\end{equation}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the case of a coin toss when we observe $k$ heads in $n$ tosses:\n", "\\begin{equation}\n", "\\label{eq:bayes} \n", "P(\\theta|\\textbf{k}) = Beta(\\alpha + \\textbf{k}, \\beta + n - \\textbf{k}) \n", "\\end{equation}\n", "\n", "we can say that $\\alpha$ and $\\beta$ play the roles of a \"prior number of heads\" and \"prior number of tails\"." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# play with the priors - here we manually set them but we could be sampling from a separate Beta\n", "trials = np.array([0, 1, 3, 5, 10, 15, 20, 100, 200, 300])\n", "heads = np.array([0, 1, 2, 4, 8, 10, 10, 50, 180, 150])\n", "x = np.linspace(0, 1, 100)\n", "\n", "# for simplicity we set a,b=1\n", "\n", "plt.figure(figsize=(10,8))\n", "for k, N in enumerate(trials):\n", " sx = plt.subplot(len(trials)/2, 2, k+1)\n", " posterior = stats.beta.pdf(x, 1 + heads[k], 1 + trials[k] - heads[k]) \n", " plt.plot(x, posterior, alpha = 0.5, label=f'{trials[k]} tosses\\n {heads[k]} heads');\n", " plt.fill_between(x, 0, posterior, color=\"#348ABD\", alpha=0.4) \n", " plt.legend(loc='upper left', fontsize=10)\n", " plt.legend()\n", " plt.autoscale(tight=True)\n", " \n", "plt.suptitle(\"Posterior probabilities for coin flips\", fontsize=15);\n", "plt.tight_layout()\n", "plt.subplots_adjust(top=0.88)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " [Top](#top)\n", "\n", "## 2. Introduction to `pyMC3`\n", " \n", "PyMC3 is a Python library for programming Bayesian analysis, and more specifically, data creation, model definition, model fitting, and posterior analysis. It uses the concept of a `model` which contains assigned parametric statistical distributions to unknown quantities in the model. Within models we define random variables and their distributions. A distribution requires at least a `name` argument, and other `parameters` that define it. You may also use the `logp()` method in the model to build the model log-likelihood function. We define and fit the model.\n", "\n", "PyMC3 includes a comprehensive set of pre-defined statistical distributions that can be used as model building blocks. Although they are not meant to be used outside of a `model`, you can invoke them by using the prefix `pm`, as in `pm.Normal`. \n", "\n", "#### Markov Chain Monte Carlo (MCMC) Simulations\n", "\n", "PyMC3 uses the **No-U-Turn Sampler (NUTS)** and the **Random Walk Metropolis**, two Markov chain Monte Carlo (MCMC) algorithms for sampling in posterior space. Monte Carlo gets into the name because when we sample in posterior space, we choose our next move via a pseudo-random process. NUTS is a sophisticated algorithm that can handle a large number of unknown (albeit continuous) variables." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model() as model:\n", " z = pm.Normal('z', mu=0., sigma=5.) \n", " x = pm.Normal('x', mu=z, sigma=1., observed=5.) \n", "print(x.logp({'z': 2.5})) \n", "print(z.random(10, 100)[:10]) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**References**:\n", "\n", "- *Salvatier J, Wiecki TV, Fonnesbeck C. 2016. Probabilistic programming in Python using PyMC3. PeerJ Computer Science 2:e55* [(https://doi.org/10.7717/peerj-cs.55)](https://doi.org/10.7717/peerj-cs.55)\n", "- [Distributions in PyMC3](https://docs.pymc.io/api/distributions.html)\n", "- [More Details on Distributions](https://docs.pymc.io/developer_guide.html)\n", "\n", "Information about PyMC3 functions including descriptions of distributions, sampling methods, and other functions, is available via the `help` command." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#help(pm.Poisson)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " [Top](#top)\n", "\n", "## 3. Bayesian Linear Regression" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's say we want to predict outcomes Y as normally distributed observations with an expected value $mu$ that is a linear function of two predictor variables, $\\bf{x}_1$ and $\\bf{x}_2$.\n", "\n", "\\begin{equation}\n", "\\mu = \\alpha + \\beta_1 \\bf{x}_1 + \\beta_2 x_2 \n", "\\end{equation}\n", "\n", "\\begin{equation}\n", "Y \\sim \\mathcal{N}(\\mu,\\,\\sigma^{2})\n", "\\end{equation} \n", "\n", "where $\\sigma^2$ represents the measurement error. \n", "\n", "In this example, we will use $\\sigma^2 = 10$\n", "\n", "We also choose the parameters as normal distributions:\n", "\n", "\\begin{eqnarray}\n", "\\alpha \\sim \\mathcal{N}(0,\\,10) \\\\\n", "\\beta_i \\sim \\mathcal{N}(0,\\,10) \\\\\n", "\\sigma^2 \\sim |\\mathcal{N}(0,\\,10)|\n", "\\end{eqnarray} \n", "\n", "We will artificially create the data to predict on. We will then see if our model predicts them correctly." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Initialize random number generator\n", "np.random.seed(123)\n", "\n", "# True parameter values\n", "alpha, sigma = 1, 1\n", "beta = [1, 2.5]\n", "\n", "# Size of dataset\n", "size = 100\n", "\n", "# Predictor variable\n", "X1 = np.linspace(0, 1, size)\n", "X2 = np.linspace(0,.2, size)\n", "\n", "# Simulate outcome variable\n", "Y = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma\n", "\n", "fig, ax = plt.subplots(1,2, figsize=(10,6), sharex=True)\n", "ax[0].scatter(X1,Y)\n", "ax[1].scatter(X2,Y)\n", "ax[0].set_xlabel(r'$x_1$', fontsize=14) \n", "ax[0].set_ylabel(r'$Y$', fontsize=14)\n", "ax[1].set_xlabel(r'$x_2$', fontsize=14) \n", "ax[1].set_ylabel(r'$Y$', fontsize=14)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pymc3 import Model, Normal, HalfNormal\n", "\n", "basic_model = Model()\n", "\n", "with basic_model:\n", "\n", " # Priors for unknown model parameters, specifically create stochastic random variables \n", " # with Normal prior distributions for the regression coefficients,\n", " # and a half-normal distribution for the standard deviation of the observations, σ.\n", " alpha = Normal('alpha', mu=0, sd=10)\n", " beta = Normal('beta', mu=0, sd=10, shape=2)\n", " sigma = HalfNormal('sigma', sd=1)\n", "\n", " # Expected value of outcome - posterior\n", " mu = alpha + beta[0]*X1 + beta[1]*X2\n", "\n", " # Likelihood (sampling distribution) of observations\n", " Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# model fitting with sampling\n", "from pymc3 import NUTS, sample, find_MAP\n", "from scipy import optimize\n", "\n", "with basic_model:\n", "\n", " # obtain starting values via MAP\n", " start = find_MAP(fmin=optimize.fmin_powell)\n", "\n", " # instantiate sampler\n", " step = NUTS(scaling=start)\n", "\n", " # draw 2000 posterior samples\n", " trace = sample(2000, step, start=start)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pymc3 import traceplot\n", "\n", "traceplot(trace);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = pm.summary(trace, \n", " var_names=['alpha', 'beta', 'sigma'])\n", "results" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This linear regression example is from the original paper on PyMC3: *Salvatier J, Wiecki TV, Fonnesbeck C. 2016. Probabilistic programming in Python using PyMC3. PeerJ Computer Science 2:e55 https://doi.org/10.7717/peerj-cs.55*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " [Top](#top)\n", "\n", "## 4. Try this at Home: Example on Mining Disasters\n", "We will go over the classical `mining disasters from 1851 to 1962` dataset. \n", "\n", "This example is from the [pyMC3 Docs](https://docs.pymc.io/notebooks/getting_started.html)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "disaster_data = pd.Series([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,\n", " 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,\n", " 2, 2, 3, 4, 2, 1, 3, np.nan, 2, 1, 1, 1, 1, 3, 0, 0,\n", " 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,\n", " 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,\n", " 3, 3, 1, np.nan, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,\n", " 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])\n", "fontsize = 12\n", "years = np.arange(1851, 1962)\n", "plt.figure(figsize=(10,5))\n", "#plt.scatter(years, disaster_data); \n", "plt.bar(years, disaster_data)\n", "plt.ylabel('Disaster count', size=fontsize)\n", "plt.xlabel('Year', size=fontsize);\n", "plt.title('Was there a Turning Point in Mining disasters from 1851 to 1962?', size=15);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Building the model\n", "\n", "**Step1:** We choose the probability model for our experiment. Occurrences of disasters in the time series is thought to follow a **Poisson** process with a large **rate** parameter in the early part of the time series, and from one with a smaller **rate** in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations. \n", "\n", "```\n", "disasters = pm.Poisson('disasters', rate, observed=disaster_data)\n", "```\n", "\n", "We have two rates, `early_rate` if $t<=s$, and `late_rate` if $t>s$, where $s$ is the year the switch was made (a.k.a. the `switchpoint`). \n", "\n", "**Step2:** Choose a prior distributions of the two rates, what we believe the rates were before we observed the data, and the switchpoint. We choose Exponential.\n", "```\n", "early_rate = pm.Exponential('early_rate', 1)\n", "```\n", "\n", "The parameters of this model are: \n", "\n", "\n", "**Note:** Watch for missing values. Missing values are handled transparently by passing a MaskedArray or a pandas.DataFrame. Behind the scenes, another random variable, disasters.missing_values is created to model the missing values. If you pass a np.array with missing values you will get an error." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model() as disaster_model:\n", "\n", " # discrete\n", " switchpoint = pm.DiscreteUniform('switchpoint', lower=years.min(), upper=years.max(), testval=1900)\n", "\n", " # Priors for pre- and post-switch rates number of disasters\n", " early_rate = pm.Exponential('early_rate', 1)\n", " late_rate = pm.Exponential('late_rate', 1)\n", "\n", " # our theta - allocate appropriate Poisson rates to years before and after current\n", " # switch is an `if` statement in puMC3\n", " rate = pm.math.switch(switchpoint >= years, early_rate, late_rate)\n", "\n", " # our observed data as a likelihood function of the `rate` parameters\n", " # shows how we think our data is distributed\n", " disasters = pm.Poisson('disasters', rate, observed=disaster_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Model Fitting" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# there are defaults but we can also more explicitly set the sampling algorithms\n", "with disaster_model:\n", " \n", " # for continuous variables\n", " step1 = pm.NUTS([early_rate, late_rate])\n", " \n", " # for discrete variables\n", " step2 = pm.Metropolis([switchpoint, disasters.missing_values[0]] )\n", "\n", " trace = pm.sample(10000, step=[step1, step2])\n", " # try different number of samples\n", " #trace = pm.sample(5000, step=[step1, step2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Posterior Analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the left side plots we notice that our early rate is between 2.5 and 3.5 disasters a year. In the late period it seems to be between 0.6 and 1.2 so definitely lower.\n", "\n", "The right side plots show the samples we drew to come to our conclusion." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "pm.traceplot(trace, ['early_rate', 'late_rate', 'switchpoint'], figsize=(20,10));" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = pm.summary(trace, \n", " var_names=['early_rate', 'late_rate', 'switchpoint'])\n", "results" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 1 }