Extras From Programming Lecture … And exercise solutions

Church / Venture Comparison

http://forestdb.org/models/

http://probcomp.csail.mit.edu/venture/

# assumes: import numpy as np; import scipy.stats as sp; import matplotlib.pyplot as plt
# v is the Venture client handle; posterior_samples is a helper defined in the exercise

v.clear()
v.assume('get_mu', '(normal 0 1)')                  # prior: mu ~ Normal(0, 1)
v.assume('get_x', '(lambda () (normal get_mu 1))')  # likelihood: x ~ Normal(mu, 1)
v.observe('(get_x)', '5.0')
v.observe('(get_x)', '6.0')

mu_samples = posterior_samples('get_mu', no_samples=400, int_mh=200)

true_e_mu = 3.7; true_sd_mu = .58  # true posterior mean / sd (analytically computed)
diff = abs(np.mean(mu_samples) - true_e_mu)
print('true E(mu | D)=%.2f; estimated=%.3f' % (true_e_mu, np.mean(mu_samples)))
assert diff < .5, 'difference > .5'

x = np.arange(1, 6, .1)
y = sp.norm.pdf(x, loc=true_e_mu, scale=true_sd_mu)  # analytic posterior density
plt.plot(x, y)
plt.hist(mu_samples, bins=15, normed=True)
plt.title('Histogram of Posterior Samples of Mu vs. True Posterior on Mu')
plt.xlabel('Mu'); plt.ylabel('P(mu | data)')
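
The reference values true_e_mu and true_sd_mu follow from the conjugate Normal-Normal update: with a Normal(0, 1) prior on mu and two unit-variance observations 5.0 and 6.0, the posterior is Normal with mean 11/3 ≈ 3.67 (rounded to 3.7 above) and standard deviation 1/sqrt(3) ≈ 0.58. Below is a minimal NumPy sketch of that check; the helper function is ours, not part of the Venture client.

import numpy as np

def normal_normal_posterior(obs, prior_mean=0.0, prior_var=1.0, lik_var=1.0):
    # Conjugate update for mu when x_i ~ Normal(mu, lik_var) and mu ~ Normal(prior_mean, prior_var)
    obs = np.asarray(obs, dtype=float)
    post_precision = 1.0 / prior_var + len(obs) / lik_var
    post_mean = (prior_mean / prior_var + obs.sum() / lik_var) / post_precision
    return post_mean, np.sqrt(1.0 / post_precision)

post_mean, post_sd = normal_normal_posterior([5.0, 6.0])
print('E(mu | D) = %.2f, sd = %.2f' % (post_mean, post_sd))   # 3.67, 0.58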

Or, equivalently, in the Venture directive language:

[assume get_mu (normal 0 1)]
[assume get_x (lambda () (normal get_mu 1))]
[observe (get_x) 5.0]
[observe (get_x) 6.0]
[predict get_mu]
[infer (mh default one 1)]
[predict get_mu]
[infer]
[predict get_mu]
[infer (rejection default all)]
…
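
For intuition about the [infer (mh default one 1)] directive, which asks for a single Metropolis-Hastings transition on one random choice, the sketch below is a plain-NumPy random-walk MH sampler for get_mu in this two-observation model. It is a conceptual stand-in only; the proposal scale, iteration count, and burn-in are illustrative choices of ours, not Venture's defaults or its actual transition kernel.

import numpy as np

def log_joint(mu, data, prior_sd=1.0, lik_sd=1.0):
    # log p(mu) + sum_i log p(x_i | mu), up to additive constants
    lp = -0.5 * (mu / prior_sd) ** 2
    lp += sum(-0.5 * ((x - mu) / lik_sd) ** 2 for x in data)
    return lp

def mh_mu(data, n_iter=2000, prop_sd=0.5, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.normal(0.0, 1.0)                  # initialize from the prior
    samples = []
    for _ in range(n_iter):
        prop = mu + rng.normal(0.0, prop_sd)   # random-walk proposal
        if np.log(rng.uniform()) < log_joint(prop, data) - log_joint(mu, data):
            mu = prop                          # accept; otherwise keep current mu
        samples.append(mu)
    return np.array(samples)

samples = mh_mu([5.0, 6.0])
print('posterior mean of mu ~= %.2f' % samples[1000:].mean())   # around 3.67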

(define observed-data
  '(4.18 5.36 7.54 2.47 8.83 6.21 5.22 6.41))

(define num-observations (length observed-data))

(define samples
  (mh-query 10 100

    ; defines
    (define mean (gaussian 0 10))
    (define var (abs (gaussian 0 5)))
    (define sample-gaussian (lambda () (gaussian mean var)))

    ; query expression
    (list mean var)

    ; condition expression
    (equal? observed-data
            (repeat num-observations sample-gaussian))))

samples
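
The defines inside mh-query specify the forward model being queried; the condition expression then restricts attention to (mean, var) pairs whose regenerated draws match observed-data. Below is a hedged NumPy sketch of one forward draw from that model; the function name and rng are ours, and the second argument of NumPy's normal (a scale) is used here exactly as var is passed to gaussian in the Church code.

import numpy as np

def forward_sample(rng, num_observations=8):
    mean = rng.normal(0.0, 10.0)        # (define mean (gaussian 0 10))
    var = abs(rng.normal(0.0, 5.0))     # (define var (abs (gaussian 0 5)))
    # (repeat num-observations sample-gaussian), with var passed straight through
    data = rng.normal(mean, var, size=num_observations)
    return mean, var, data

mean, var, data = forward_sample(np.random.default_rng(0))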

Stuhlmüller

Mansinghka

Limitations
  • General
    • Still limited to small models and small datasets
      • DARPA PPAML / Venture / Probabilistic-C / probabilistic-js
    • Little documentation
    • Buggy implementations
  • Philosophical
    • Not all machine learning models and techniques are naturally generative
      • Markov Random Fields / Factor Graphs
  • Anglican
    • Forcing the outermost observe to be an ERP can be programmatically cumbersome
Workflow

Traditional
  • Repeat
    • Define model
    • Derive inference updates
      • MCMC
        • Conditionals
      • Variational
        • Fixed-point updates
    • Code inference algorithm
    • Test
    • Find bugs in code
    • Use
    • Find
      • Bugs in model
      • Inference doesn't work

Probabilistic Programming
  • Repeat
    • Code generative model
    • Use
    • Find
      • Bugs in model
      • Inference doesn't work