
# A (poor) Gibbs Sampling Approach to Logistic Regression - PowerPoint PPT Presentation

Kyle Bogdan and Grant Brown




Presentation Transcript

Grant Brown

### A (poor) Gibbs Sampling Approach to Logistic Regression

• Simulated based on known values of parameters (one covariate, ‘dose’).

• ‘rats’ given different dosages of imaginary chemical, 4 dose groups with 25 rats in each group.

• Data generated three times under different parameters, three chains used for each data set.
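The data setup above can be sketched in Python. The slides do not state the actual parameter values or dose levels, so the numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed 'true' parameters and dose levels -- placeholders, not the
# values actually used in the presentation.
beta0, beta1 = -2.0, 1.5
doses = np.array([0.0, 1.0, 2.0, 3.0])   # 4 dose groups
n_per_group = 25                          # 25 'rats' per group

# Response probability for each group via the inverse-logit link
p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * doses)))

# Number of responding 'rats' in each dose group
y = rng.binomial(n_per_group, p)
```

Repeating this generation under different parameter settings yields the three simulated data sets mentioned below.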

• Traditionally: a binomial likelihood, with the prior placed on the logit-scale coefficients.

• The full conditionals have no closed form.

• A Gibbs approach is attractive, however, because it eliminates the need to reject draws.

• Groenewald and Mokgatlhe, 2005

• Create uniform latent variables U[i,j] based on whether Y[i,j] = 0 or 1

• Draw from the joint posterior of the betas and the U[i,j]

• p[i] = P(U <= logit^{-1}(Beta * x[i])), where U ~ Uniform(0, 1)
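A minimal sketch of this latent-uniform Gibbs scheme, assuming a flat prior, a single strictly positive covariate, and individual 0/1 outcomes. The data and function names are illustrative, not taken from the paper: given the current betas, each U[i,j] is drawn below p[i] when Y[i,j] = 1 and above it when Y[i,j] = 0; given the latent uniforms, each beta's full conditional is uniform on the interval where all the induced linear constraints hold.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit(u):
    return np.log(u / (1.0 - u))

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative data: 4 dose groups of 25 'rats'; the parameter values
# are assumptions, not the ones used in the presentation.
x = np.repeat([1.0, 2.0, 3.0, 4.0], 25)         # strictly positive doses
y = rng.binomial(1, inv_logit(-3.0 + 1.2 * x))  # individual 0/1 outcomes

def latent_uniform_gibbs(x, y, n_iter=1000):
    """One-covariate sketch of the latent-uniform Gibbs sampler."""
    b0, b1 = 0.0, 0.0
    draws = np.empty((n_iter, 2))
    ones, zeros = (y == 1), (y == 0)
    for t in range(n_iter):
        p = inv_logit(b0 + b1 * x)
        # Latent uniforms: U <= p where y = 1, U > p where y = 0
        u = np.where(ones, rng.uniform(0.0, p), rng.uniform(p, 1.0))
        z = logit(u)
        # b0 | b1, u: uniform on the interval where every constraint
        # b0 + b1*x >= z (y = 1) and b0 + b1*x <= z (y = 0) holds
        b0 = rng.uniform(np.max(z[ones] - b1 * x[ones]),
                         np.min(z[zeros] - b1 * x[zeros]))
        # b1 | b0, u: same idea; x > 0 keeps the inequality directions
        b1 = rng.uniform(np.max((z[ones] - b0) / x[ones]),
                         np.min((z[zeros] - b0) / x[zeros]))
        draws[t] = b0, b1
    return draws

draws = latent_uniform_gibbs(x, y)
```

Because the current betas always satisfy the constraints implied by the freshly drawn uniforms, each conditional interval is non-empty, so no draw is ever rejected.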

• Written in R, refined in Python

• Very inefficient

• A new latent U[i,j] must be drawn for each Y[i,j] at every iteration

• Three datasets

• Three chains per set

• 1 million iterations per chain

• Last 500,000 iterations of each chain sent to CODA

• 9 million total iterations; 4.5 million analyzed
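The bookkeeping above is easy to verify:

```python
# 3 data sets x 3 chains x 1,000,000 iterations, keeping the last
# 500,000 draws of each chain for analysis in CODA.
datasets = 3
chains_per_set = 3
iters_per_chain = 1_000_000
kept_per_chain = 500_000

total = datasets * chains_per_set * iters_per_chain
analyzed = datasets * chains_per_set * kept_per_chain
print(total, analyzed)  # 9000000 4500000
```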

• Y[i,j]’s given binomial (instead of Bernoulli) likelihood

• The logit of each group's proportion is modeled as linear in dose (via the betas)

• Locally uniform priors on beta1 and beta2

```
model {
  for (i in 1:N) {
    r[i] ~ dbin(p[i], n[i])
    logit(p[i]) <- beta1 + beta2 * (x[i] - mean(x[]))
    r.hat[i] <- p[i] * n[i]
  }
  beta1 ~ dflat()
  beta2 ~ dflat()
  beta1nocenter <- beta1 - beta2 * mean(x[])
}
```

• Uses group proportions instead of the individual Y[i,j]'s

• Convergence is better

• WinBUGS also appears more precise (more trials would be needed to confirm this)

• It is also much faster.

• Groenewald, Pieter C. N., and Lucky Mokgatlhe. "Bayesian computation for logistic regression." Computational Statistics & Data Analysis 48 (2005): 857-868. <http://www.sciencedirect.com/>.

• Professor Cowles