
A (poor) Gibbs Sampling Approach to Logistic Regression


Presentation Transcript


  1. Kyle Bogdan, Grant Brown. A (poor) Gibbs Sampling Approach to Logistic Regression

  2. Data • Simulated based on known parameter values (one covariate, ‘dose’) • ‘Rats’ given different dosages of an imaginary chemical: 4 dose groups with 25 rats in each group • Data generated three times under different parameter values; three chains run for each data set (a simulation sketch follows)
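For concreteness, here is a minimal Python sketch of the data generation described above. The dose levels and the "true" parameter pair are placeholders (the pair (1, -2) echoes one of the settings named in the WinBUGS output slides), not necessarily the values used in the talk:

  import numpy as np

  rng = np.random.default_rng(42)
  doses = np.repeat([0.0, 1.0, 2.0, 3.0], 25)  # 4 dose groups, 25 'rats' each
  beta1_true, beta2_true = 1.0, -2.0           # placeholder intercept and slope
  p = 1.0 / (1.0 + np.exp(-(beta1_true + beta2_true * doses)))
  y = rng.binomial(1, p)                       # one 0/1 outcome per rat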

  3. Gibbs Sampling for Logistic Data? • Traditionally: binomial likelihood, prior on the logit • The full conditionals have no closed form • Attractive, however, because it eliminates the rejection step that Metropolis-Hastings-style samplers need
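Concretely, with p_i = logit^{-1}(beta' x_i) the posterior kernel under a prior pi(beta) is

  \pi(\beta \mid y) \;\propto\; \pi(\beta) \prod_i \left[ \frac{e^{\beta' x_i}}{1 + e^{\beta' x_i}} \right]^{y_i} \left[ \frac{1}{1 + e^{\beta' x_i}} \right]^{1 - y_i},

which is not the kernel of any standard family in beta, so no conjugate full conditionals exist and plain Gibbs sampling does not apply directly.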

  4. Algorithm • Groenewald and Mokgatlhe, 2005 • Create uniform latent variables U[i,j] based on whether Y[i,j] = 0 or 1 • Draw from the joint posterior of the betas and the U[i,j] • p[i] = P(U <= logit^-1(beta' x[i])) for U ~ Uniform(0, 1) • Written in R, refined in Python • Very inefficient: a new latent variable is drawn for every Y[i,j] at each iteration (sketched below)
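A minimal Python sketch of this latent-uniform Gibbs sampler, following the construction in Groenewald and Mokgatlhe (2005): given beta, each U[i,j] is uniform on (0, p) when Y[i,j] = 1 and on (p, 1) when Y[i,j] = 0; the constraints invert to linear bounds on each beta component, whose flat-prior full conditional is then uniform on an interval. Function and variable names here are illustrative, not from the talk's code:

  import numpy as np

  rng = np.random.default_rng(0)

  def logit(u):
      return np.log(u / (1.0 - u))

  def inv_logit(z):
      return 1.0 / (1.0 + np.exp(-z))

  def gibbs_logistic(y, X, n_iter=10_000):
      # y: 0/1 outcome per rat; X: design matrix (intercept, dose).
      n, k = X.shape
      beta = np.zeros(k)
      draws = np.empty((n_iter, k))
      for t in range(n_iter):
          # Step 1: draw the latent uniforms given beta:
          # U on (0, p) where y = 1, on (p, 1) where y = 0.
          p = inv_logit(X @ beta)
          u = np.where(y == 1, rng.uniform(0.0, p), rng.uniform(p, 1.0))
          # Equivalent linear constraints on beta:
          # X @ beta >= logit(u) where y = 1, X @ beta < logit(u) where y = 0.
          z = logit(u)
          # Step 2: under a flat prior, each beta_j's full conditional is
          # uniform on the interval those constraints imply.
          for j in range(k):
              xj = X[:, j]
              rest = X @ beta - xj * beta[j]
              with np.errstate(divide="ignore", invalid="ignore"):
                  b = (z - rest) / xj
              pos, neg = xj > 0, xj < 0
              lo = np.concatenate([b[(y == 1) & pos], b[(y == 0) & neg]])
              hi = np.concatenate([b[(y == 0) & pos], b[(y == 1) & neg]])
              # Wide fallback bounds stand in for a locally uniform prior
              # if one side happens to be unconstrained.
              beta[j] = rng.uniform(lo.max() if lo.size else -50.0,
                                    hi.min() if hi.size else 50.0)
          draws[t] = beta
      return draws

Note the inefficiency the slide mentions: every iteration redraws one latent variable per individual outcome Y[i,j], i.e. 100 latent draws per iteration for this data set.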

  5. Implementation • Three data sets • Three chains per data set • 1 million iterations per chain • Last 500,000 iterations of each chain sent to CODA • 9 million total iterations, 4.5 million analyzed
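Reusing the sketches above, the run layout might look like the following (a far shorter run is shown, and the talk's diagnostics were done in CODA rather than Python):

  X = np.column_stack([np.ones_like(doses), doses])
  chains = [gibbs_logistic(y, X, n_iter=10_000) for _ in range(3)]
  # Keep only the second half of each chain as post-burn-in draws,
  # mirroring 'last 500,000 iterations sent to CODA'.
  kept = [c[c.shape[0] // 2:] for c in chains]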

  6. Initial Problems

  7. Sampler Output/Diagnostics

  8. Sampler Output/Diagnostics

  9. Sampler Output/Diagnostics

  10. Sampler Output/Diagnostics

  11. Sampler Output/Diagnostics

  12. WinBUGS Model • Group counts r[i] given a binomial (instead of per-rat Bernoulli) likelihood • Logit of each group’s proportion regressed on (centered) dose • Locally uniform (flat) priors on beta1 and beta2

  13. WinBUGS Model

  model {
    for (i in 1:N) {
      r[i] ~ dbin(p[i], n[i]);
      logit(p[i]) <- beta1 + beta2 * (x[i] - mean(x[]));
      r.hat[i] <- p[i] * n[i];
    }
    beta1 ~ dflat();
    beta2 ~ dflat();
    beta1nocenter <- beta1 - beta2 * mean(x[]);
  }
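Centering x reduces posterior correlation between the intercept and slope, and beta1nocenter recovers the intercept on the original dose scale. For readers without WinBUGS, a roughly equivalent model can be sketched in Python with PyMC; PyMC was not part of the original talk, and the dose levels and counts r below are hypothetical:

  import numpy as np
  import pymc as pm

  x = np.array([0.0, 1.0, 2.0, 3.0])  # dose per group (placeholder values)
  n = np.repeat(25, 4)                # 25 rats per group
  r = np.array([18, 7, 2, 0])         # hypothetical success counts

  with pm.Model():
      beta1 = pm.Flat("beta1")        # locally uniform priors, like dflat()
      beta2 = pm.Flat("beta2")
      p = pm.math.invlogit(beta1 + beta2 * (x - x.mean()))
      pm.Binomial("r_obs", n=n, p=p, observed=r)
      idata = pm.sample()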

  14. WinBUGS Output: Beta0 (1,0)

  15. WinBUGS Output: Beta0 (1,1)

  16. WinBUGS Output: Beta0 (1,-2)

  17. Comparison

  18. WinBUGS Wins • Uses group proportions instead of individual Y[i,j]’s • Convergence is better • WinBUGS appears more precise (more trials would be needed to confirm) • Also much faster
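The first point is a one-liner given the per-rat data sketched earlier: the individual 0/1 outcomes collapse into the binomial group counts the WinBUGS model consumes.

  # Aggregate the per-rat 0/1 outcomes into binomial counts per dose group.
  r = y.reshape(4, 25).sum(axis=1)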

  19. Resources • Groenewald, Pieter C.N., and Lucky Mokgatlhe. "Bayesian computation for logistic regression." Computational Statistics & Data Analysis 48 (2005): 857-868. ScienceDirect. Elsevier. Web. <http://www.sciencedirect.com/>. • Professor Cowles
