class: center, middle, inverse, title-slide

# Markov Chain Monte Carlo II
## PSYC 573
### University of Southern California
### February 17, 2022

---
exclude: TRUE

# Mid-Semester Feedback

Thanks for sharing your thoughts!

- `\(n\)` = 10

--

What has been useful?

- R commands, exercises

--

Struggles/Suggestions?

- Download Rmd from website
- Math concepts
- More R code (especially related to the homework)

---
exclude: TRUE
class: clear

Questions?

- Psychology examples
- Posterior predictive check
- Beta likelihood; law of total probability (from conditional to marginal)
- HW 3 ...

--

Changes

- Speed up a bit
- Zoom
- More time in R

---
class: clear

The original Metropolis (random walk) algorithm allows posterior sampling without the need to solve the integral

--

However, it is inefficient, especially in high-dimensional problems (i.e., many parameters)

---
class: inverse, middle, center

# Data Example

Taking notes with a pen or a keyboard?

---
class: clear

### Mueller & Oppenheimer (2014, Psych Science)

.panelset[
.panel[.panel-name[R code]

```r
# Use haven::read_sav() to import SPSS data
nt_dat <- haven::read_sav("https://osf.io/qrs5y/download")
```

]
.panel[.panel-name[Data]
.font70[

0 = laptop, 1 = longhand

| condition| wordcount| objectiveZ|  openZ|
|---------:|---------:|----------:|------:|
|         0|       572|     -1.175|  0.111|
|         0|       226|      0.317| -0.864|
|         0|       255|     -1.175| -0.376|
|         0|       298|      1.063|  1.086|
|         1|       203|     -1.921| -0.376|
|         1|       127|     -0.056|  0.111|
|         1|       258|      0.690|  0.111|
|         1|       152|     -0.429|  0.599|

]
]
.panel[.panel-name[Histograms]

<img src="gibbs_files/figure-html/hist-wordcount-1.png" width="70%" style="display: block; margin: auto;" />

]
]

---
class: clear

> Do people write more or fewer words when asked to write in longhand?

### Normal model

Consider only the laptop group first (data preparation is sketched on a later slide)

`$$\text{wc_laptop}_i \sim N(\mu, \sigma^2)$$`

Two parameters: `\(\mu\)` (mean), `\(\sigma^2\)` (variance)

---
class: inverse, middle, center

# Gibbs Sampling

---
class: clear

Gibbs sampling gains efficiency by generating smart proposals, based on **conjugate** or **semiconjugate** priors

Implemented in software like BUGS and JAGS

--

Useful when:

- the joint posterior is intractable, but the **conditional distributions** are known

---
exclude: TRUE

# Conditional Distributions
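---
class: clear

### Data preparation (sketch)

Below is a minimal sketch of how the laptop group's word counts could be extracted for the normal model. The variable name `wc_laptop` is assumed, and the rescaling to hundreds of words is an assumption for illustration (it puts `\(\mu\)` on the scale of the posterior summary shown later); it is not necessarily the exact preprocessing used in class.

```r
# Keep only the laptop group (condition == 0), and rescale word
# counts to hundreds of words (assumed scale, for illustration)
wc_laptop <- nt_dat$wordcount[nt_dat$condition == 0] / 100
# Sample size, mean, and variance used in the conditional posteriors
c(n = length(wc_laptop), ybar = mean(wc_laptop), s2y = var(wc_laptop))
```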
---
class: clear

Another example

.pull-left[

<img src="gibbs_files/figure-html/mnorm_persp-1.png" width="95%" style="display: block; margin: auto;" />

]
.pull-right[

<img src="gibbs_files/figure-html/condprob-1-1.png" width="95%" style="display: block; margin: auto;" />

]

---
class: clear

.font70[

### Conjugate priors for conditional distributions

`\begin{align}
\mu & \sim N(\mu_0, \tau_0^2) \\
\sigma^2 & \sim \text{Inv-Gamma}(\nu_0 / 2, \nu_0 \sigma^2_0 / 2)
\end{align}`

- `\(\mu_0\)`: Prior mean; `\(\tau_0^2\)`: Prior variance (i.e., uncertainty) of the mean
- `\(\nu_0\)`: Prior sample size for the variance; `\(\sigma^2_0\)`: Prior expectation of the variance

### Posterior

`\begin{align}
\mu \mid \sigma^2, y & \sim N(\mu_n, \tau_n^2) \\
\sigma^2 \mid \mu, y & \sim \text{Inv-Gamma}(\nu_n / 2, \nu_n \sigma^2_n [\mu] / 2)
\end{align}`

- `\(\tau_n^2 = \left(\frac{1}{\tau_0^2} + \frac{n}{\sigma^2}\right)^{-1}\)`; `\(\mu_n = \tau_n^2 \left(\frac{\mu_0}{\tau_0^2} + \frac{n \bar y}{\sigma^2}\right)\)`
- `\(\nu_n = \nu_0 + n\)`; `\(\sigma^2_n (\mu) = \frac{1}{\nu_n} \left[\nu_0 \sigma^2_0 + (n - 1) s^2_y + n (\bar y - \mu)^2\right]\)`

]

---
class: clear

No separate proposal distribution is needed; each parameter is sampled directly from its conditional posterior

- Thus, all draws are accepted

--

### Posterior Summary

2 chains, 10,000 draws each, with the first half discarded as warm-up

`\(\mu_0\)` = 5, `\(\sigma^2_0\)` = 1, `\(\tau^2_0\)` = 100, `\(\nu_0\)` = 1

.font70[

|variable | mean| median|    sd|   mad|    q5|  q95| rhat| ess_bulk| ess_tail|
|:--------|----:|------:|-----:|-----:|-----:|----:|----:|--------:|--------:|
|mu       |  3.1|   3.10| 0.213| 0.211| 2.753| 3.45|    1|     9928|     9936|
|sigma2   |  1.4|   1.33| 0.378| 0.337| 0.904| 2.11|    1|    10189|    10136|

]

The ESS is almost as large as the number of draws
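---
class: clear

### A minimal Gibbs sampler in R

To make the two conditional draws concrete, here is a sketch in base R implementing the formulas from the previous slides. It is an illustration, not the exact code behind the summary above: the function name `gibbs_normal`, the single chain, and the starting value for `\(\sigma^2\)` are assumptions, and `wc_laptop` is assumed to hold the laptop group's word counts in hundreds of words.

```r
gibbs_normal <- function(y, n_iter = 10000,
                         mu0 = 5, tau2_0 = 100,
                         nu0 = 1, sigma2_0 = 1) {
  n <- length(y)
  ybar <- mean(y)
  ss_y <- (n - 1) * var(y)  # sum of squares around the sample mean
  draws <- matrix(NA, n_iter, 2, dimnames = list(NULL, c("mu", "sigma2")))
  sigma2 <- var(y)  # starting value (an assumption)
  for (s in seq_len(n_iter)) {
    # Draw mu | sigma2, y ~ N(mu_n, tau2_n)
    tau2_n <- 1 / (1 / tau2_0 + n / sigma2)
    mu_n <- tau2_n * (mu0 / tau2_0 + n * ybar / sigma2)
    mu <- rnorm(1, mean = mu_n, sd = sqrt(tau2_n))
    # Draw sigma2 | mu, y ~ Inv-Gamma(nu_n / 2, nu_n * sigma2_n(mu) / 2),
    # using the identity sum((y - mu)^2) = (n - 1) s^2_y + n (ybar - mu)^2
    nu_n <- nu0 + n
    sigma2_n <- (nu0 * sigma2_0 + ss_y + n * (ybar - mu)^2) / nu_n
    sigma2 <- 1 / rgamma(1, nu_n / 2, rate = nu_n * sigma2_n / 2)
    draws[s, ] <- c(mu, sigma2)
  }
  draws
}
draws <- gibbs_normal(wc_laptop)
# Discard the first half as warm-up, then summarize the rest
post <- draws[-seq_len(5000), ]
rbind(mean = colMeans(post), apply(post, 2, quantile, c(.05, .5, .95)))
```

The diagnostic columns in the table above (rhat, ess_bulk, ess_tail) match the output format of `posterior::summarize_draws()`; similar diagnostics could be obtained here with `posterior::summarize_draws(posterior::as_draws_matrix(post))`, though with a single chain rhat is less informative.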