

Kyle Bogdan

Grant Brown

A (poor) Gibbs Sampling Approach to Logistic Regression



Data

  • Simulated based on known values of the parameters (one covariate, ‘dose’).

  • ‘Rats’ were given different doses of an imaginary chemical: 4 dose groups with 25 rats in each group.

  • Data were generated three times under different parameter values, with three chains run for each data set (a simulation sketch follows this list).
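
The slides do not include the simulation code, so the following is a minimal Python sketch of how such data could be generated. The dose levels, seed, and the logistic dose-response form are assumptions; the true (beta1, beta2) pairs are taken from the WinBUGS output slides later in the deck.

import numpy as np

def simulate_doses(beta1, beta2, doses=(1.0, 2.0, 3.0, 4.0), per_group=25, seed=0):
    """Simulate one 0/1 outcome per rat under a logistic dose-response model."""
    rng = np.random.default_rng(seed)
    x = np.repeat(doses, per_group)                  # dose assigned to each rat
    p = 1.0 / (1.0 + np.exp(-(beta1 + beta2 * x)))   # P(response) for each rat
    y = rng.binomial(1, p)                           # Bernoulli outcome per rat
    return x, y

# Three data sets under different true parameters, matching the
# (beta1, beta2) settings on the output slides: (1,0), (1,1), (1,-2).
datasets = [simulate_doses(1.0, b2, seed=s) for s, b2 in enumerate((0.0, 1.0, -2.0))]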



Gibbs Sampling For Logistic Data?

  • Traditionally: a binomial likelihood, with a prior on the logit.

  • The full conditionals have no closed form (see the set-up below).

  • Gibbs sampling is attractive, however, because it eliminates the need to reject iterations.
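
To make the difficulty concrete, here is the standard set-up (our notation, not the slides'):

\[
p_i = \operatorname{logit}^{-1}(\beta_1 + \beta_2 x_i)
    = \frac{e^{\beta_1 + \beta_2 x_i}}{1 + e^{\beta_1 + \beta_2 x_i}},
\qquad
\pi(\beta \mid y) \;\propto\; \pi(\beta)\,\prod_i p_i^{\,y_i}\,(1 - p_i)^{\,1 - y_i}.
\]

Each full conditional, e.g. \(\pi(\beta_1 \mid \beta_2, y)\), keeps the same product of logistic terms, which is not the kernel of any standard distribution, so it cannot be drawn from directly.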



Algorithm

  • Groenewald and Mokgatlhe, 2005

  • Create uniform latent variables U[i,j] based on Y[i,j] = 0, 1

  • Draw from the joint posterior of the Betas and the U[i,j]

    • pi[i] = P(Uniform(0,1) <= logit⁻¹(Beta * x[i]))

  • Written in R, refined in Python

  • Very inefficient (see the sketch after this list)

    • A new latent variable is drawn for each Y[i,j] at each iteration
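
The slides do not reproduce the R/Python source, so the following is a minimal Python sketch of one sweep of the latent-uniform sampler as we read Groenewald and Mokgatlhe (2005). It assumes a flat prior on the Betas and a design matrix X with an intercept column; the names (gibbs_sweep, X, y, beta) are illustrative, not from the original code.

import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_sweep(beta, X, y, rng):
    """One Gibbs sweep: redraw every latent uniform, then each Beta coordinate."""
    eta = X @ beta                          # linear predictor per observation
    p = inv_logit(eta)
    # u | y, beta is U(0, p[i]) when y[i] = 1 and U(p[i], 1) when y[i] = 0,
    # so that y[i] = 1 exactly when u[i] <= p[i].
    u = np.where(y == 1, rng.uniform(0.0, p), rng.uniform(p, 1.0))
    t = logit(u)                            # thresholds on the linear-predictor scale
    # Given u, a flat prior makes beta uniform on the polytope
    # {X @ beta >= t where y = 1} and {X @ beta < t where y = 0}; each
    # coordinate's full conditional is therefore uniform on an interval,
    # so no draw is ever rejected.
    for k in range(beta.size):
        rest = eta - X[:, k] * beta[k]      # contribution of the other coordinates
        lo, hi = -np.inf, np.inf
        for i in range(y.size):
            if X[i, k] == 0.0:
                continue
            bound = (t[i] - rest[i]) / X[i, k]
            # y = 1 needs X[i] @ beta >= t[i]; y = 0 needs the reverse;
            # dividing by a negative X[i, k] flips the inequality.
            if (y[i] == 1) == (X[i, k] > 0.0):
                lo = max(lo, bound)
            else:
                hi = min(hi, bound)
        beta[k] = rng.uniform(lo, hi)       # interval is finite once both outcomes occur
        eta = rest + X[:, k] * beta[k]
    return beta

The inner loop over every Y[i,j] at every iteration is exactly the inefficiency the slide notes.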



Implementation

  • Three data sets

  • Three chains per data set

  • 1 million iterations per chain

  • Last 500,000 iterations of each chain sent to CODA

  • 9 million total iterations, 4.5 million analyzed (a driver sketch follows this list)
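
As a hypothetical driver matching this configuration (it reuses the gibbs_sweep sketch above; the burn-in split and starting values are assumptions):

# 3 data sets x 3 chains x 1,000,000 sweeps = 9 million iterations;
# only the last 500,000 sweeps of each chain are stored for CODA.
n_iter, n_keep = 1_000_000, 500_000

def run_chain(X, y, beta_start, seed):
    rng = np.random.default_rng(seed)
    beta = beta_start.copy()
    kept = np.empty((n_keep, beta.size))
    for it in range(n_iter):
        beta = gibbs_sweep(beta, X, y, rng)
        if it >= n_iter - n_keep:
            kept[it - (n_iter - n_keep)] = beta
    return kept   # e.g. written to disk and read into CODA in R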


Initial Problems

(figure-only slide; images not included in the transcript)


Sampler Output/Diagnostics

(five figure-only slides of sampler output and diagnostics; images not included in the transcript)



WinBUGS Model

  • The Y[i,j]’s are given a binomial (instead of Bernoulli) likelihood

  • The logit of each group’s response probability is regressed on the centered dose

  • Locally uniform (flat) priors on beta1 and beta2



WinBUGS Model

model {
  for (i in 1:N) {
    r[i] ~ dbin(p[i], n[i]);
    logit(p[i]) <- beta1 + beta2 * (x[i] - mean(x[]));
    r.hat[i] <- p[i] * n[i];
  }
  beta1 ~ dflat();
  beta2 ~ dflat();
  beta1nocenter <- beta1 - beta2 * mean(x[]);
}


WinBUGS Output: Beta0 (1,0)

(figure-only slide)


WinBUGS Output: Beta0 (1,1)

(figure-only slide)


WinBUGS Output: Beta0 (1,-2)

(figure-only slide)


Comparison

(figure-only slide)



WinBUGS Wins

  • Uses group proportions instead of individual Y[i,j]’s

  • Convergence is better

  • WinBUGS appears more precise (more trials would be needed to confirm)

  • Also, much faster



Resources

  • Groenewald, Pieter C. N., and Lucky Mokgatlhe. "Bayesian computation for logistic regression." Computational Statistics & Data Analysis 48 (2005): 857-68. ScienceDirect. Elsevier. Web. <http://www.sciencedirect.com/>.

  • Professor Cowles

