Intro to Bayesian Learning Exercise Solutions



  1. Intro to Bayesian Learning Exercise Solutions. Ata Kaban, The University of Birmingham, 2005

  2. 1) In a casino, two differently loaded but identically looking dice are thrown in repeated runs. The frequencies of numbers observed in 40 rounds of play are as follows:
  • Die 1, [Nr, Frequency]: [1,5], [2,3], [3,10], [4,1], [5,10], [6,11]
  • Die 2, [Nr, Frequency]: [1,10], [2,11], [3,4], [4,10], [5,3], [6,2]
  Characterise the two dice by the corresponding random sequence model they generated. That is, estimate the parameters of the random sequence model for both dice.
  ANSWER: The maximum-likelihood estimate of each face probability is its observed frequency divided by the 40 throws:
  • Die 1, [Nr, P_1(Nr)]: [1,0.125], [2,0.075], [3,0.250], [4,0.025], [5,0.250], [6,0.275]
  • Die 2, [Nr, P_2(Nr)]: [1,0.250], [2,0.275], [3,0.100], [4,0.250], [5,0.075], [6,0.050]
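
  As a sanity check, here is a minimal Python sketch (not part of the original slides) that reproduces these maximum-likelihood estimates from the raw counts:

    counts = {
        "die 1": [5, 3, 10, 1, 10, 11],   # observed frequencies of faces 1..6
        "die 2": [10, 11, 4, 10, 3, 2],
    }
    for die, c in counts.items():
        total = sum(c)                    # 40 rounds of play
        # MLE of a categorical distribution: relative frequencies
        print(die, [round(n / total, 3) for n in c])
    # die 1 [0.125, 0.075, 0.25, 0.025, 0.25, 0.275]
    # die 2 [0.25, 0.275, 0.1, 0.25, 0.075, 0.05]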

  3. (ii) Some time later, one of the dice has disappeared. You (as the casino owner) need to find out which one. The remaining one is now thrown 40 times and here are the observed counts: [1,8], [2,12], [3,6], [4,9], [5,4], [6,1]. Use Bayes' rule to decide the identity of the remaining die.
  ANSWER: Since we have a random sequence model (i.i.d. data) D, the probability of D under the two hypotheses is
  P(D | die i) = ∏_{Nr=1..6} P_i(Nr)^count(Nr), for i = 1, 2.
  Evaluating in logs gives log P_1(D) ≈ -96.07 and log P_2(D) ≈ -66.23. Since there is no prior knowledge about either die, we use a flat prior, i.e. the same 0.5 for both hypotheses. Because P_1(D) < P_2(D), and the prior is the same for both hypotheses, we conclude that the die in question is die no. 2.
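
  A short Python sketch of this decision (an illustration, assuming the estimates from question 1 and the flat 0.5/0.5 prior):

    import math

    p1 = [0.125, 0.075, 0.250, 0.025, 0.250, 0.275]  # estimates for die 1
    p2 = [0.250, 0.275, 0.100, 0.250, 0.075, 0.050]  # estimates for die 2
    counts = [8, 12, 6, 9, 4, 1]                     # new observed counts

    # log P(D | die i) = sum over faces of count * log P_i(face)
    loglik1 = sum(c * math.log(p) for c, p in zip(counts, p1))
    loglik2 = sum(c * math.log(p) for c, p in zip(counts, p2))
    print(round(loglik1, 2), round(loglik2, 2))      # -96.07 -66.23
    # With a flat prior the posterior ratio equals the likelihood ratio,
    # so the MAP decision is simply the larger likelihood: die 2.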

  4. 2) A simple model for a data sequence is the random sequence model, i.e. each symbol is generated independently from some distribution. A more complex model is a Markov model, i.e. the probability of a symbol at time t depends on the symbol observed at time t-1. Consider the following two sequences:
  (s1): A B B A B A A A B A A B B B
  (s2): B B B B B A A A A A B B B B
  • Further, consider the following two models:
  (M1): a random sequence model with parameters P(A)=0.4, P(B)=0.6
  (M2): a first-order Markov model with initial probabilities 0.5 for both symbols and the following transition matrix: P(A|A)=0.6, P(B|A)=0.4, P(A|B)=0.1, P(B|B)=0.9.
  Which of s1 and s2 is more likely to have been generated from which of the models M1 and M2? Justify your answer both using intuitive arguments and by using Bayes' rule. (As there is no prior knowledge given here, consider equal prior probabilities.)

  5. ANSWER:
  • Intuitively, s2 contains more state repetitions, which is evidence that the Markov structure of M2 is more likely than the random structure of M1. The sequence s1 in turn is apparently more random, therefore it is more likely to have been generated from M1.
  • The log probability of s1 under the models is the following (s1 has 7 A's and 7 B's, with transition counts A→A: 3, A→B: 4, B→A: 3, B→B: 3):
  log P(s1|M1) = 7*log(0.4) + 7*log(0.6) = -9.9898
  log P(s1|M2) = log(0.5) + 3*log(0.6) + 4*log(0.4) + 3*log(0.1) + 3*log(0.9) = -13.1146
  • So s1 is more likely to have been generated from M1.
  • Similarly, for s2 (5 A's and 9 B's; transition counts A→A: 4, A→B: 1, B→A: 1, B→B: 7) we get:
  log P(s2|M1) = 5*log(0.4) + 9*log(0.6) = -9.1789
  log P(s2|M2) = log(0.5) + 4*log(0.6) + log(0.4) + log(0.1) + 7*log(0.9) = -6.6928
  • So s2 is more likely to have been generated from M2.
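
  The following Python sketch (an illustration, not from the original slides; the helper names loglik_random and loglik_markov are hypothetical) computes all four log likelihoods:

    import math

    def loglik_random(seq, p):
        # log P(seq | random sequence model): symbols are i.i.d.
        return sum(math.log(p[s]) for s in seq)

    def loglik_markov(seq, init, trans):
        # log P(seq | first-order Markov model): initial prob, then one
        # transition probability per adjacent pair of symbols
        ll = math.log(init[seq[0]])
        for prev, cur in zip(seq, seq[1:]):
            ll += math.log(trans[prev][cur])
        return ll

    pM1 = {"A": 0.4, "B": 0.6}
    init = {"A": 0.5, "B": 0.5}
    trans = {"A": {"A": 0.6, "B": 0.4}, "B": {"A": 0.1, "B": 0.9}}

    for name, seq in [("s1", "ABBABAAABAABBB"), ("s2", "BBBBBAAAAABBBB")]:
        print(name, round(loglik_random(seq, pM1), 4),
              round(loglik_markov(seq, init, trans), 4))
    # s1 -9.9898 -13.1146  -> s1 favours M1
    # s2 -9.1789 -6.6928   -> s2 favours M2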

  6. 3) You are to be tested for a disease whose prevalence in the population is 1 in 1000. The lab test used is not always perfect: it has a false-positive rate of 1%. [A false-positive result is when the test is positive although the disease is not present.] The false-negative rate of the test is zero. [A false negative is when the test result is negative while in fact the disease is present.]
  • a) If you are tested and you get a positive result, what is the probability that you actually have the disease?
  • b) Under the conditions in the previous question, is it more probable that you have the disease or that you don't?
  • c) Would the answers to a) and/or b) differ if you used a maximum likelihood versus a maximum a posteriori hypothesis estimation method? Comment on your answer.

  7. ANSWER a) We have two binary variables, A and B: A is the outcome of the test, B is the presence/absence of the disease. We need to compute P(B=1|A=1). We use Bayes' theorem:
  P(B=1|A=1) = P(A=1|B=1) P(B=1) / [P(A=1|B=1) P(B=1) + P(A=1|B=0) P(B=0)]
  The required quantities are known from the problem. These are the following:
  • P(A=1|B=1) = 1, i.e. true positives
  • P(B=1) = 1/1000, i.e. prevalence
  • P(A=1|B=0) = 0.01, i.e. false positives
  • P(B=0) = 1 - 1/1000
  Replacing, we have:
  P(B=1|A=1) = (1 × 0.001) / (1 × 0.001 + 0.01 × 0.999) ≈ 0.09099
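
  A minimal Python sketch of this computation, with variable names chosen for illustration:

    p_b1 = 1 / 1000       # prior P(B=1): prevalence of the disease
    p_a1_b1 = 1.0         # P(A=1|B=1): zero false-negative rate
    p_a1_b0 = 0.01        # P(A=1|B=0): false-positive rate

    evidence = p_a1_b1 * p_b1 + p_a1_b0 * (1 - p_b1)  # P(A=1)
    posterior = p_a1_b1 * p_b1 / evidence             # P(B=1|A=1)
    print(round(posterior, 5))                        # 0.09099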

  8. b) Under the conditions in the previous question, is it more probable that you have the disease or that you don't? ANSWER: P(B=0|A=1) = 1 - P(B=1|A=1) = 1 - 0.09099 = 0.90901. So clearly it is more probable that the disease is not present.

  9. c) Would the answers to a) and/or b) differ if you used a maximum likelihood versus a maximum a posteriori hypothesis estimation method? Comment on your answer. ANSWER:
  • ML maximises P(D|h) w.r.t. h, whereas MAP maximises P(h|D). So MAP includes prior knowledge about the hypothesis, as P(h|D) is in fact proportional to P(D|h)*P(h). This is a good example where the importance and influence of prior knowledge is evident.
  • The answer at b) is based on the maximum a posteriori estimate, as we have included prior knowledge in the form of the prevalence of the disease. If that were not taken into account, i.e. if both P(B=1)=0.5 and P(B=0)=0.5 were assumed, then the hypothesis estimate would be the maximum likelihood one. In that case, since P(A=1|B=1)=1 > P(A=1|B=0)=0.01, the presence of the disease would come out as more probable than its absence. This is an example of how prior knowledge can influence Bayesian decisions.
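
  A small Python sketch contrasting the two estimates under these assumptions (the hypothesis labels are illustrative):

    likelihood = {"disease": 1.0, "healthy": 0.01}  # P(A=1 | hypothesis)
    prior = {"disease": 0.001, "healthy": 0.999}    # prevalence-based prior

    # ML picks the hypothesis with the largest likelihood alone;
    # MAP weights each likelihood by its prior before comparing.
    ml = max(likelihood, key=likelihood.get)
    map_ = max(prior, key=lambda h: likelihood[h] * prior[h])
    print("ML :", ml)    # disease -- the likelihood alone favours B=1
    print("MAP:", map_)  # healthy -- the prior reverses the decision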
