
Binomial and Related Distributions



Presentation Transcript


  1. Binomial and Related Distributions Student: 黃柏舜  Student ID: 102581010  Instructor: 蔡章仁

  2. Binomial and Related Distributions In this section of the website, we explore the binomial distribution and, in particular, how to do hypothesis testing using the binomial distribution. We also explain the relationship between the binomial and normal distributions, as well as some related distributions, namely the proportion, negative binomial, geometric, hypergeometric, beta, multinomial and Poisson distributions.

  3. Binomial Distribution Definition 1: Suppose an experiment has the following characteristics: • the experiment consists of n independent trials, each with two mutually exclusive outcomes (success and failure) • for each trial the probability of success is p (and so the probability of failure is 1 – p). Each such trial is called a Bernoulli trial. Let x be the discrete random variable whose value is the number of successes in n trials. Then the probability distribution function for x is called the binomial distribution, B(n, p), and is defined as follows: f(x) = C(n, x) p^x (1 – p)^(n–x), where C(n, x) = n!/[x!(n – x)!] and n! = n(n – 1)(n – 2)⋯3∙2∙1 as described in Combinatorial Functions. C(n, x) can be calculated by using the Excel function COMBIN(n, x).
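
The pdf above can be evaluated directly from this definition. The sketch below does so in Python, assuming scipy is available for a cross-check; the helper name binom_pmf is mine, not something from the slides.

```python
from math import comb          # C(n, x) = n! / (x! (n - x)!)
from scipy.stats import binom  # used only to cross-check the hand-rolled formula

def binom_pmf(x: int, n: int, p: float) -> float:
    """Probability of exactly x successes in n independent Bernoulli(p) trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(comb(10, 4))             # 210, the same count as the Excel call COMBIN(10, 4)
print(binom_pmf(3, 10, 0.25))  # ~0.2503
print(binom.pmf(3, 10, 0.25))  # should agree with the value above
```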

  4. Binomial Distribution Observation: Figure 1 shows a graph of the probability density function for B(10, .25). Figure 1 – Binomial distribution. That the graph looks a lot like the normal distribution is not a coincidence, as we will see shortly. Property 1: (a) the mean of the binomial distribution B(n, p) is μ = np; (b) its variance is σ² = np(1 – p).

  5. Binomial Distribution Excel Functions: Excel provides the following functions regarding the binomial distribution: BINOMDIST(x, n, p, cum) where n = the number of trials, p = the probability of success for each trial and cum takes the values TRUE or FALSE. BINOMDIST(x, n, p, FALSE) = probability density function f(x) value at x for the binomial distribution B(n, p), i.e. the probability that there are exactly x successes in n trials where the probability of success on any trial is p. BINOMDIST(x, n, p, TRUE) = cumulative probability distribution F(x) value at x for the binomial distribution B(n, p), i.e. the probability that there are at most x successes in n trials where the probability of success on any trial is p. Excel also provides CRITBINOM(n, p, α), which returns the smallest value x for which BINOMDIST(x, n, p, TRUE) ≥ α; it is used below for finding critical values.
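
For readers working outside Excel, the same two quantities are exposed by scipy.stats.binom (my choice of library, not one the slides use): pmf plays the role of BINOMDIST with cum = FALSE, and cdf plays the role of cum = TRUE.

```python
from scipy.stats import binom

n, p = 10, 0.25

# Analog of BINOMDIST(x, n, p, FALSE): probability of exactly x successes
print(binom.pmf(4, n, p))   # ~0.1460

# Analog of BINOMDIST(x, n, p, TRUE): probability of at most x successes
print(binom.cdf(4, n, p))   # ~0.9219
```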

  6. Binomial Distribution Example 1: What is the probability that if you throw a die 10 times it will come up six exactly 4 times? We can model this problem using the binomial distribution B(10, 1/6) as follows: f(4) = C(10, 4) (1/6)^4 (5/6)^6 = 0.054266. Alternatively the problem can be solved using the Excel function: BINOMDIST(4, 10, 1/6, FALSE) = 0.054266
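
A quick check of this result, sketched in Python under the assumption that scipy is installed:

```python
from scipy.stats import binom

# P(exactly 4 sixes in 10 throws of a fair die), i.e. f(4) for B(10, 1/6)
print(binom.pmf(4, 10, 1/6))   # ~0.054266, matching BINOMDIST(4, 10, 1/6, FALSE)
```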

  7. Hypothesis Testing for Binomial Distribution Example 1: Suppose you have a die and you suspect that it is biased towards the number 3, and so run an experiment in which you throw the die 10 times and count that the die comes up 4 times with the number 3. Determine whether the die is biased. The population random variable x = the number of times 3 occurs in 10 trials has a binomial distribution B(10, π) where π is the population parameter corresponding to the probability of success on any trial. We define the following null hypothesis: H0: π ≤ 1/6; i.e. the die is not biased towards the number 3. H1: π > 1/6. Now setting α = 0.05, the p-value is the probability of observing 4 or more threes when π = 1/6: P(x ≥ 4) = 1 – BINOMDIST(3, 10, 1/6, TRUE) = 1 – 0.930272 = 0.069728 > 0.05 = α. And so we cannot reject the null hypothesis; the experiment does not give us 95% confidence that the die is biased towards the number 3.
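
The p-value can be reproduced as a short Python sketch (assuming scipy); binom.sf(k, n, p) returns P(x > k), so binom.sf(3, ...) is exactly P(x ≥ 4).

```python
from scipy.stats import binom

# H0: pi <= 1/6.  Observed: 4 threes in 10 throws of the die.
# p-value = P(x >= 4 | pi = 1/6) = 1 - P(x <= 3)
p_value = binom.sf(3, 10, 1/6)   # sf(k) = 1 - cdf(k), so this is P(x >= 4)
print(p_value)                   # ~0.0697 > 0.05, so the null hypothesis is not rejected
```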

  8. Hypothesis Testing for Binomial Distribution Example 2: We suspect that a coin is biased towards heads. When we toss the coin 9 times, how many heads need to come up before we are confident that the coin is biased towards heads? We use the following null hypothesis: H0: π ≤ .5 H1: π > .5 Using a confidence factor of 95% (i.e. α = .05), we calculate CRITBINOM(n, p, 1–α) = CRITBINOM(9, .5, .95) = 7, the smallest value c for which BINOMDIST(c, 9, .5, TRUE) ≥ .95: note that BINOMDIST(6, 9, .5, TRUE) = .9102 < .95, while BINOMDIST(7, 9, .5, TRUE) = .9804 ≥ .95. And so 7 is the critical value, and we reject the null hypothesis only when the number of heads exceeds it. If 8 or more heads come up, then P(x ≥ 8) = 1 – .9804 = .0196 < .05 = α, and we are 95% confident that the coin is biased towards heads.
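
The same critical value can be found with binom.ppf, the scipy analog of CRITBINOM (again a sketch under the assumption that scipy is the target environment); the sf calls confirm the tail probabilities behind the rejection rule.

```python
from scipy.stats import binom

n, p, alpha = 9, 0.5, 0.05

# Analog of CRITBINOM(9, 0.5, 0.95): smallest c with P(x <= c) >= 0.95
print(binom.ppf(1 - alpha, n, p))   # 7.0

# Tail probabilities behind the rejection rule
print(binom.sf(6, n, p))   # P(x >= 7) ~ 0.0898 > 0.05
print(binom.sf(7, n, p))   # P(x >= 8) ~ 0.0195 < 0.05, so reject only at 8 or more heads
```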

  9. Relationship between Binomial and Normal Distributions Theorem 1: If x is a random variable with distribution B(n, p), then for sufficiently large n, the distribution of the variable z = (x – μ)/σ, where μ = np and σ = √(np(1 – p)), is approximately the standard normal distribution N(0, 1). Corollary 1: Provided n is large enough, N(μ, σ) is a good approximation for B(n, p) where μ = np and σ² = np(1 – p). Observation: The normal distribution is a good approximation for the binomial distribution when np ≥ 5 and n(1 – p) ≥ 5. Another way to look at this is that the normal distribution is a good approximation for the binomial distribution when n > 10 and .4 < p < .6, or n > 30 and .1 < p < .9.

  10. Example 1: What is the normal distribution approximation for the binomial distribution where n = 20 and p = .25 (i.e. the binomial distribution displayed in Figure 1 of Binomial Distribution)? As in Corollary 1, define the following parameters: μ = np = 20 × .25 = 5 and σ = √(np(1 – p)) = √(20 × .25 × .75) = 1.94. Since np = 5 ≥ 5 and n(1 – p) = 15 ≥ 5, based on Corollary 1 we can conclude that B(20, .25) ~ N(5, 1.94). We now show the graph of both pdfs to see visibly how close these distributions are: Figure 1 – Binomial vs. normal distribution
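
How close the approximation is can be checked numerically rather than visually; the sketch below (assuming scipy) compares the exact binomial pmf with the normal density at a few points, in the spirit of the figure.

```python
from scipy.stats import binom, norm

n, p = 20, 0.25
mu = n * p                          # 5
sigma = (n * p * (1 - p)) ** 0.5    # ~1.94

# Compare the exact binomial pmf with the approximating normal density pointwise
for x in range(11):
    print(x, round(binom.pmf(x, n, p), 4), round(norm.pdf(x, mu, sigma), 4))
```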

  11. Proportion Distribution Definition 1: If x is a random variable with binomial distribution B(n, p) then the random variable y = x/n is said to have the proportion distribution. Property 1: Where y has a proportion distribution as defined above, the mean and variance of y are μy = p and σy² = p(1 – p)/n. Proof: By Property 1b of Expectation and Property 1a of Binomial Distribution, E[y] = E[x/n] = E[x]/n = np/n = p. By Property 3b of Expectation, var(y) = var(x/n) = var(x)/n² = np(1 – p)/n² = p(1 – p)/n. Theorem 1: Provided n is large enough – generally when np ≥ 5 and n(1 – p) ≥ 5 – then N(μy, σy) is a good approximation for the proportion distribution for y with μy = π and σy = √(π(1 – π)/n).
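
A small simulation illustrating Property 1 and Theorem 1, sketched with numpy (my choice of tool; the parameter values 50 and 0.3 are arbitrary): the simulated proportions should have mean close to p and standard deviation close to √(p(1 – p)/n).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 0.3

# Proportion of successes y = x/n over many samples of n Bernoulli(p) trials
props = rng.binomial(n, p, size=100_000) / n

print(props.mean())              # ~0.3, i.e. close to p
print(props.std())               # close to sqrt(p(1 - p)/n)
print((p * (1 - p) / n) ** 0.5)  # ~0.0648, the theoretical standard error
```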

  12. Hypothesis Testing – one sample From the theorem, we know that when sufficiently large samples of size n are taken, the distribution of sample proportions is approximately normal, distributed around the true population proportion π, with standard deviation (i.e. the standard error) √(π(1 – π)/n). We can use this fact to do hypothesis testing as was done for the normal distribution. In addition, when a two-tailed test is performed, a confidence interval can be calculated as p ± zcrit √(p(1 – p)/n), where we use the sample proportion p as an estimate for the population proportion π when calculating the standard error. This introduces additional error, which is acceptable for large values of n.

  13. Example 1: A company believes that 50% of its customers are women. A sample of 600 customers is chosen and 325 of them are women. Is this significantly different from their belief? We test H0: π = .5 with a two-tailed test and α = .05. Method 1: Using the binomial distribution, P(x ≥ 325) = 1 – BINOMDIST(324, 600, .5, TRUE) ≈ .023 < .025 = α/2, and so we reject the null hypothesis. Method 2: By Theorem 1 we can also use the normal distribution. The observed proportion is 325/600 = 0.541667, while under H0 the mean is .5 and the standard error is √(.5(1 – .5)/600) = 0.020412. Thus NORMDIST(0.541667, .5, 0.020412, TRUE) = .9794 > .975 = 1 – α/2. And so we reach the same conclusion, namely to reject the null hypothesis.
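
Both methods can be reproduced as a Python sketch (assuming scipy); binom.sf gives the exact upper tail used in Method 1, and norm.cdf plays the role of NORMDIST in Method 2.

```python
from scipy.stats import binom, norm

n, pi0, successes = 600, 0.5, 325

# Method 1: exact binomial upper tail, compared with alpha/2 for a two-tailed test
print(binom.sf(successes - 1, n, pi0))   # P(x >= 325) ~ 0.023 < 0.025, reject H0

# Method 2: normal approximation for the sample proportion
p_hat = successes / n                    # 0.541667
se = (pi0 * (1 - pi0) / n) ** 0.5        # ~0.0204
print(norm.cdf(p_hat, pi0, se))          # ~0.9794 > 0.975, reject H0
```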

  14. Example 2: A survey of 1,100 voters showed that 53% are in favor of the new tax reform. Can we conclude that the majority of voters (from the population) are in favor? We test H0: π ≤ .5 against H1: π > .5. The standard error, estimated from the sample proportion, is √(.53(1 – .53)/1100) = 0.01505, and so NORMDIST(.53, .5, 0.01505, TRUE) = .976889 > .95, and so we can reject the null hypothesis and conclude with 95% confidence that the population will vote in favor of the tax reform. We determine the 95% confidence interval as follows: zcrit = NORMSINV(1 – α/2) = NORMSINV(0.975) = 1.96. And so the 95% confidence interval is .53 ± 1.96 × 0.01505 = .53 ± .0295, i.e. (.501, .559).
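
The one-tailed test can be checked as follows (a sketch assuming scipy; norm.cdf stands in for NORMDIST):

```python
from scipy.stats import norm

n, p_hat, pi0 = 1100, 0.53, 0.5

# Standard error estimated from the sample proportion, as on the slide
se = (p_hat * (1 - p_hat) / n) ** 0.5   # ~0.01505

# One-tailed test of H0: pi <= 0.5 against H1: pi > 0.5
print(norm.cdf(p_hat, pi0, se))         # ~0.9769 > 0.95, reject H0
```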

  15. And so we conclude with 95% confidence that between 50.1% and 55.9% of the population will be in favor. If however we are looking for a 99% confidence interval, then zcrit = NORMSINV(1 – α/2) = NORMSINV(0.995) = 2.58. And so the 99% confidence interval is .53 ± 2.58 × 0.01505 = .53 ± .0388, i.e. (.491, .569). This means that with 99% confidence, between 49.1% and 56.9% of the population will be in favor.
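
Both intervals follow from the same standard error; a sketch assuming scipy, with norm.ppf in the role of NORMSINV:

```python
from scipy.stats import norm

p_hat, n = 0.53, 1100
se = (p_hat * (1 - p_hat) / n) ** 0.5       # ~0.01505

for conf in (0.95, 0.99):
    z = norm.ppf(1 - (1 - conf) / 2)        # 1.96 for 95%, ~2.58 for 99%
    print(conf, p_hat - z * se, p_hat + z * se)
    # 95%: ~(0.501, 0.559); 99%: ~(0.491, 0.569)
```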

  16. Hypothesis Testing – two samples Theorem 2: Let x1 be a proportion distribution with mean π1 and number of trials n1 and let x2 be a proportion distribution with mean π2 and number of trials n2. When the numbers of trials n1 and n2 are sufficiently large, usually when niπi ≥ 5 and ni(1 – πi) ≥ 5, the difference between the sample proportions p1 – p2 will be approximately normal with mean π1 – π2 and standard deviation √(π1(1 – π1)/n1 + π2(1 – π2)/n2). Proof: Based on Theorem 1 of the Proportion Distribution, xi approximately has the distribution N(πi, √(πi(1 – πi)/ni)). Since x1 and x2 are independently distributed, by the linear transformation properties of the normal distribution, x1 – x2 approximately has the distribution N(π1 – π2, √(π1(1 – π1)/n1 + π2(1 – π2)/n2)).

  17. Example 4: A company which manufactures long-lasting light bulbs sells halogen and compact fluorescent bulbs. They ran an experiment in which they ran 100 halogen and 100 fluorescent bulbs continuously for 250 days. After 250 days they found that half of the halogen bulbs were still working while 60% of the fluorescent bulbs were still operating. Is there a significant difference between the two types of bulbs? Let x1 = the proportion of halogen bulbs that are functional after 250 days and x2 = the proportion of fluorescent bulbs that are functional after 250 days. The presumption is that each of these has a proportion distribution. We now test the following null hypothesis: H0: π1 = π2. Assuming the null hypothesis is true, then by Theorem 2, x1 – x2 will be approximately normal with mean π1 – π2 = 0 and standard deviation √(π(1 – π)(1/n1 + 1/n2)) = √(2π(1 – π)/n)

  18. where the common value of the mean is denoted π and both samples are of size n. Since the value of π is unknown, we estimate it from the samples: there are 50 + 60 = 110 successes out of 200 trials, i.e. π ≈ 0.55. Thus, the mean of x1 – x2 is 0 (based on the null hypothesis) and the standard deviation is approximately √(2 × 0.55 × 0.45/100) = .0704. The observed value of x1 – x2 is .60 – .50 = .10, and so we have (two-tail test): p-value = 2(1 – NORMDIST(.1, 0, .0704, TRUE)) = 2(1 – .9224) = .155 > .05 = α. Thus, we cannot reject the null hypothesis, and conclude that the data do not show a significant difference between the two types of bulbs. We reach the same conclusion since the critical value of x1 – x2 = NORMINV(.975, 0, .0704) = .138 > .1 = observed value of x1 – x2.
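
The two-sample calculation can be checked with a Python sketch (assuming scipy); the pooled standard error uses the formula from Theorem 2 with the common π estimated from both samples.

```python
from scipy.stats import norm

n1 = n2 = 100
p1, p2 = 0.50, 0.60   # surviving proportions of halogen and fluorescent bulbs

# Pooled estimate of the common proportion under H0: pi1 = pi2
p_pool = (n1 * p1 + n2 * p2) / (n1 + n2)              # 0.55
se = (p_pool * (1 - p_pool) * (1/n1 + 1/n2)) ** 0.5   # ~0.0704

z = (p2 - p1) / se                # ~1.42
p_value = 2 * norm.sf(abs(z))     # two-tailed p-value ~0.155
print(se, z, p_value)             # 0.155 > 0.05, so no significant difference
```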

  19. Thank you for listening!
