
Analysis of uncertain data: Evaluation of given hypotheses; Selection of probes for information gathering

Anatole Gershman, Eugene Fink, Bin Fu, and Jaime G. Carbonell


Presentation Transcript


  1. Analysis of uncertain data: Evaluation of given hypotheses; Selection of probes for information gathering. Anatole Gershman, Eugene Fink, Bin Fu, and Jaime G. Carbonell

  2. Analysis of uncertain data: Evaluation of given hypotheses; Selection of probes for information gathering. Anatole Gershman, Eugene Fink, Bin Fu, and Jaime G. Carbonell

  3. Example. The analyst has to distinguish between two hypotheses: Retires (prior 0.4) and Joins Vikings (prior 0.6).

  4. Example. Observations: Without the tearful public ceremony that accompanied his retirement announcement from the Green Bay Packers just 11 months ago, quarterback Brett Favre has told the New York Jets he is retiring. Minnesota coach Brad Childress, jilted at the altar Tuesday afternoon by Brett Favre telling him he wasn't going to play for the Vikings in 2009. According to many rumors, quarterback Brett Favre has closed on the purchase of a home in Eden Prairie, MN, where the Minnesota Vikings' team facility is located.

  5. Example. Observation distributions: Without the tearful public ceremony that accompanied his retirement announcement from the Green Bay Packers just 11 months ago, quarterback Brett Favre has told the New York Jets he is retiring. P(says retire | Retires) = 0.9; P(says retire | Joins Vikings) = 0.6. Bayesian induction: P(Retires | says retire) = P(Retires) ∙ P(says retire | Retires) / (P(Retires) ∙ P(says retire | Retires) + P(Joins Vikings) ∙ P(says retire | Joins Vikings)) = 0.4 ∙ 0.9 / (0.4 ∙ 0.9 + 0.6 ∙ 0.6) = 0.5
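This Bayesian update can be sketched numerically; a minimal illustration using the priors from slide 3 and the likelihoods from slide 5 (the variable names are mine, not from the talk):

```python
# Bayesian update for the Favre example.
# Priors from slide 3; likelihoods of the "says retire" observation from slide 5.
priors = {"Retires": 0.4, "JoinsVikings": 0.6}
likelihoods = {"Retires": 0.9, "JoinsVikings": 0.6}

# P(says retire): total probability of the observation under both hypotheses.
evidence = sum(priors[h] * likelihoods[h] for h in priors)

# Posterior P(H | says retire) for each hypothesis.
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
print(posteriors)  # both posteriors come out to 0.5, matching the slide
```

Both products happen to equal 0.36, so the observation leaves the two hypotheses equally likely.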

  6. General problem. We have to distinguish among n mutually exclusive hypotheses, denoted H1, H2, …, Hn. For every hypothesis, we know its prior; thus, we have an array of n priors: P(H1), P(H2), …, P(Hn). [Figure: two hypotheses with priors 0.4 and 0.6.]

  7. General problem. We base the analysis on m observable features, denoted OBS1, OBS2, …, OBSm. Each observation is a variable that takes one of several discrete values. For every observation OBSa, we know the number of its possible values, num[a]; thus, we have an array num[1..m] with the number of values for each observation. For every hypothesis, we know the related probability distribution of each observation: P(oa,j | Hi) represents the probabilities of the possible values of OBSa under Hi. We also know a specific value of each observation, val[1..m]. [Figure: a binary observation OBS1 with num[1] = 2 ("I will RETIRE!" / "I won't RETIRE!") and example distributions 0.9/0.1 and 0.4/0.6 under the two hypotheses.]

  8. General problem. We have to evaluate the posterior probabilities of the n given hypotheses, denoted Post(H1), Post(H2), …, Post(Hn). [Figure: posteriors 0.5 and 0.5 for the two-hypothesis example.]
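A sketch of this general computation in the arrays' terms (the function and variable names are mine; for simplicity the sketch combines the observations as if they were independent, an assumption the talk later notes is often wrong, which is why it instead selects a single best observation):

```python
def evaluate_posteriors(prior, cond, val):
    """Posterior probabilities Post(H_1..n) given observed values.

    prior[i]      -- P(H_i)
    cond[i][a][v] -- P(OBS_a = v | H_i)
    val[a]        -- observed value of OBS_a
    Multiplies observation likelihoods as if independent (illustration only).
    """
    joint = []
    for i, p in enumerate(prior):
        for a, v in enumerate(val):
            p *= cond[i][a][v]
        joint.append(p)
    total = sum(joint)
    return [p / total for p in joint]

# The Favre example: two hypotheses, one binary observation.
prior = [0.4, 0.6]                   # Retires, Joins Vikings
cond = [[[0.9, 0.1]], [[0.6, 0.4]]]  # rows: hypotheses; inner lists: value distributions
print(evaluate_posteriors(prior, cond, [0]))  # -> [0.5, 0.5]
```

With one observation the independence caveat is moot, and the result reproduces the 0.5/0.5 posteriors from the slides.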

  9. Extension #1. Add a "something else" hypothesis, H0 ("surprise"), covering the case when none of the given hypotheses is correct. [Figure: example priors 0.6 and 0.35 for the given hypotheses and 0.05 for H0.]

  10. Extension #1. After observing val, the posterior probability of H0 is: Post(H0) = P(H0) ∙ P(val | H0) / P(val) = P(H0) ∙ P(val | H0) / (P(H0) ∙ P(val | H0) + likelihood(val)), where likelihood(val) is the total probability of val under the given hypotheses. Bad news: we do not know P(val | H0). Good news: Post(H0) depends monotonically on P(val | H0); thus, if we obtain lower and upper bounds for P(val | H0), we also get bounds for Post(H0).

  11. Plausibility principle. Unlikely events normally do not happen; thus, if we have observed val, its likelihood must not be too small. Plausibility threshold: we use a global constant plaus, between 0.0 and 1.0. If we have observed val, we assume that P(val) ≥ plaus / num. This assumption gives bounds for P(val | H0): Lower: (plaus / num − likelihood(val)) / P(H0). Upper: 1.0.

  12. Plausibility principle. The plausibility assumption gives bounds for P(val | H0): Lower: (plaus / num − likelihood(val)) / P(H0). Upper: 1.0. We substitute these bounds into the dependency of Post(H0) on P(val | H0), thus obtaining bounds for Post(H0): Lower: 1.0 − likelihood(val) ∙ num / plaus. Upper: P(H0) / (P(H0) + likelihood(val)). We have thus derived bounds for the probability that none of the given hypotheses is correct.
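The two substitutions reduce to short closed forms, which can be sketched as follows (the function name and the example numbers are mine, not from the talk):

```python
def post_h0_bounds(prior_h0, likelihood_val, num, plaus):
    """Bounds on Post(H0) from the plausibility principle (slide 12).

    prior_h0       -- P(H0), prior of the "surprise" hypothesis
    likelihood_val -- likelihood(val): total probability of val
                      under the given hypotheses
    num            -- number of possible values of the observation
    plaus          -- plausibility threshold, between 0.0 and 1.0
    """
    # Lower bound: plug the lower bound on P(val | H0) into Post(H0);
    # the P(H0) factors cancel, leaving 1 - likelihood * num / plaus.
    lower = max(0.0, 1.0 - likelihood_val * num / plaus)
    # Upper bound: plug P(val | H0) = 1.0 into Post(H0).
    upper = prior_h0 / (prior_h0 + likelihood_val)
    return lower, upper

# Hypothetical numbers (not from the talk): P(H0) = 0.05,
# likelihood(val) = 0.38, a binary observation, plaus = 0.8.
lower, upper = post_h0_bounds(0.05, 0.38, 2, 0.8)
print(lower, upper)
```

Note that the parameters must be mutually consistent (the lower bound on P(val | H0) cannot exceed 1), otherwise the interval degenerates.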

  13. Extension #2. Multiple observations: which one(s) to use? Full Bayesian analysis would require their joint distribution, which is difficult to get, and the independence assumption usually does not work. Instead, we identify the highest-utility observation and do not use the other observations to corroborate it.

  14. Extension #2: Utility function. Given posterior distributions such as (0.5, 0.5), (0.4, 0.6), and (0.35, 0.65), which one is "better"?

  15. Extension #2: Utility function. Candidate: Shannon's entropy (negated).

  16. Extension #2: Utility function. Candidate: KL divergence.

  17. Extension #2: Utility function. Candidate: a self-defined function.

  18. Analysis of uncertain data: Evaluation of given hypotheses; Selection of probes for information gathering. Anatole Gershman, Eugene Fink, Bin Fu, and Jaime G. Carbonell

  19. Example. The analyst has to distinguish between two hypotheses: Retires (prior 0.4) and Joins Vikings (prior 0.6).

  20. Example. Probe: execute an external action and observe its response, to gather more information. [Figure: asking "I will RETIRE!", with probabilities 0.4, 0.5, 0.6, 0.5.]

  21. Example. Probe selection depends on three factors: gain (the utility function), probe cost, and observation probability.

  22. Probe selection. single-obs-gain(probe_j, OBS_a) = visible[i, a, j] ∙ (likelihood(1) ∙ probe-gain(1) + … + likelihood(num[a]) ∙ probe-gain(num[a])) + (1.0 − visible[i, a, j]) ∙ cost[j]. Here probe-gain is the utility function, likelihood is the observation probability, and cost[j] is the probe cost. Overall: gain(probe_j) = max(single-obs-gain(probe_j, OBS_1), …, single-obs-gain(probe_j, OBS_m)).
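The slide's formula can be sketched directly (the function names and the example numbers are mine; `likelihood` and `probe_gain` are assumed to come from the preceding Bayesian analysis, and the cost term is combined exactly as the formula writes it, entered as a negative contribution):

```python
def single_obs_gain(visible, likelihood, probe_gain, cost):
    """Expected gain of a probe through one observation (slide 22).

    visible     -- visible[i, a, j]: probability the probe reveals OBS_a
    likelihood  -- likelihood[v]: probability of observing value v
    probe_gain  -- probe_gain[v]: utility gain if value v is observed
    cost        -- cost[j] term, weighted by the chance nothing is revealed
    """
    expected = sum(l * g for l, g in zip(likelihood, probe_gain))
    return visible * expected + (1.0 - visible) * cost

def gain(visibles, likelihoods, probe_gains, cost):
    """Overall gain of a probe: the best of its single observations."""
    return max(
        single_obs_gain(v, l, g, cost)
        for v, l, g in zip(visibles, likelihoods, probe_gains)
    )

# Hypothetical probe with two candidate observations:
g = gain(
    visibles=[0.9, 0.5],
    likelihoods=[[0.5, 0.5], [0.3, 0.7]],
    probe_gains=[[1.0, 3.0], [2.0, 2.0]],
    cost=-1.0,  # cost entered as a negative contribution
)
print(g)  # 0.9 * 2.0 - 0.1 * 1.0 = 1.7 via the first observation
```

Taking the max over observations matches the talk's choice to rely on the single highest-utility observation rather than corroborating across observations.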

  23. Experiment. Task: evaluating hypotheses H1, H2, H3, H4. Without probes: accuracy of distinguishing H1 from the other hypotheses.

  24. Experiment. Task: evaluating hypotheses H1, H2, H3, H4. Probe selection for distinguishing H1 from the other hypotheses.

  25. Experiment. Task: evaluating hypotheses H1, H2, H3, H4. Probe selection for distinguishing all four hypotheses.

  26. Summary. Use Bayesian inference to distinguish among mutually exclusive hypotheses, including an H0 ("surprise") hypothesis and multiple observations. Use probes to gather more information for better analysis, accounting for cost, the utility function, observation probability, and so on.

  27. Thank you
