
  1. Non-Standard-Datenbanken Bayesian Networks, Inference, and Learning Prof. Dr. Ralf Möller Universität zu Lübeck Institut für Informationssysteme

  2. Example The dentist problem with four random variables: • Toothache (Is the reported pain really a toothache?) • Cavity (Could it be a cavity?) • Catch (Does the steel probe catch, i.e., produce a test pain?) • Weather (sunny, rainy, cloudy, snow) The following slides contain material from Chapter 14 (Sections 1 and 2)

  3. Prior probability • Prior or unconditional probabilities of propositions e.g., P(Cavity = true) = 0.1 and P(Weather = sunny) = 0.72 correspond to belief prior to arrival of any (new) evidence • Probability distribution gives values for all possible assignments: P(Weather) = <0.72, 0.1, 0.08, 0.1> (normalized, i.e., sums to 1 because exactly one condition must be the case)

  4. Full joint probability distribution • Joint probability distribution for a set of random variables gives the probability of every atomic event on those random variables • P(Weather, Cavity) is a 4 × 2 matrix of values:
                      Weather = sunny   rainy   cloudy   snow
      Cavity = true             0.144   0.02    0.016    0.02
      Cavity = false            0.576   0.08    0.064    0.08
  • Full joint probability distribution: all random variables involved • P(Toothache, Catch, Cavity, Weather) • Every query about a domain can be answered by the full joint distribution
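
To make the tabular representation concrete, here is a minimal Python sketch (added for illustration, not part of the original slides) that stores the joint distribution P(Weather, Cavity) from the table above as a dictionary and marginalizes out either variable; all function and variable names are illustrative.

```python
# Full joint distribution P(Weather, Cavity) from the slide, stored as a dict.
joint = {
    ('sunny', True): 0.144, ('rainy', True): 0.02,  ('cloudy', True): 0.016, ('snow', True): 0.02,
    ('sunny', False): 0.576, ('rainy', False): 0.08, ('cloudy', False): 0.064, ('snow', False): 0.08,
}

def marginal_weather(joint):
    """P(Weather): sum out Cavity."""
    p = {}
    for (w, c), prob in joint.items():
        p[w] = p.get(w, 0.0) + prob
    return p

def marginal_cavity(joint):
    """P(Cavity): sum out Weather."""
    p = {}
    for (w, c), prob in joint.items():
        p[c] = p.get(c, 0.0) + prob
    return p

# Up to floating-point rounding:
print(marginal_weather(joint))  # {'sunny': 0.72, 'rainy': 0.1, 'cloudy': 0.08, 'snow': 0.1}
print(marginal_cavity(joint))   # {True: 0.2, False: 0.8}
```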

  5. Discrete random variables: Notation • Dom(Weather) = {sunny, rainy, cloudy, snow} and Dom(Weather) is disjoint from the domains of the other random variables: • Atomic event Weather = rainy often written as rainy • Example: P(rainy), the random variable Weather is implicitly defined by the value rainy • Boolean variable Cavity • Atomic event Cavity = true written as cavity • Atomic event Cavity = false written as ¬cavity • Examples: P(cavity) or P(¬cavity)

  6. Conditional probability • Conditional or posterior probabilities e.g., P(cavity | toothache) = 0.8 or: <0.8> i.e., given that toothache is all I know • (Notation for conditional distributions: P(Cavity | Toothache) is a 2-element vector of 2-element vectors) • If we know more, e.g., cavity is also given, then we have P(cavity | toothache, cavity) = 1 • New evidence may be irrelevant, allowing simplification, e.g., P(cavity | toothache, sunny) = P(cavity | toothache) = 0.8 • This kind of inference, sanctioned by domain knowledge, is crucial

  7. Conditional probability • Definition of conditional probability (in terms of unconditional probabilities): P(a | b) = P(a ∧ b) / P(b) if P(b) > 0 • Product rule gives an alternative formulation (∧ is commutative): P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a) • A general version holds for whole distributions, e.g., P(Weather, Cavity) = P(Weather | Cavity) P(Cavity) View this as a set of 4 × 2 equations, not matrix multiplication: (1,1) P(Weather=sunny | Cavity=true) P(Cavity=true), (1,2) P(Weather=sunny | Cavity=false) P(Cavity=false), … • Chain rule is derived by successive application of the product rule: P(X1, …, Xn) = P(X1, …, Xn-1) P(Xn | X1, …, Xn-1) = P(X1, …, Xn-2) P(Xn-1 | X1, …, Xn-2) P(Xn | X1, …, Xn-1) = … = ∏i=1..n P(Xi | X1, …, Xi-1)
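
As a small numeric illustration (not from the slides), the following lines compute a conditional probability from the P(Weather, Cavity) table by the definition P(a | b) = P(a ∧ b) / P(b) and then recover the joint entry again via the product rule.

```python
# Joint P(Weather, Cavity) from slide 4.
joint = {
    ('sunny', True): 0.144, ('rainy', True): 0.02,  ('cloudy', True): 0.016, ('snow', True): 0.02,
    ('sunny', False): 0.576, ('rainy', False): 0.08, ('cloudy', False): 0.064, ('snow', False): 0.08,
}

def cond(w, c):
    """P(Weather=w | Cavity=c) = P(w, c) / P(c), the definition of conditional probability."""
    p_c = sum(p for (_, cc), p in joint.items() if cc == c)   # sum out Weather
    return joint[(w, c)] / p_c

# Definition:   P(sunny | cavity) = P(sunny, cavity) / P(cavity) = 0.144 / 0.2 = 0.72
print(cond('sunny', True))
# Product rule: P(sunny, cavity) = P(sunny | cavity) * P(cavity) = 0.72 * 0.2 = 0.144
print(cond('sunny', True) * 0.2)
```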

  8. Inference by enumeration • Start with the joint probability distribution: • For any proposition φ, sum the probabilities of the atomic events where it is true: P(φ) = Σω:ω⊨φ P(ω) • P(toothache) = 0.108 + 0.012 + 0.016 + 0.064 = 0.2 • Unconditional or marginal probability of toothache • Process is called marginalization or summing out • Ax3: P(A ∪ B) = P(A) + P(B) for disjoint events A, B ⊆ Ω

  9. Marginalization and conditioning • Let Y, Z be sequences of random variables such that Y ∪ Z denotes all random variables describing the world • Marginalization: P(Y) = Σz P(Y, z), summing over all value combinations z of Z • Conditioning: P(Y) = Σz P(Y | z) P(z)

  10. Inference by enumeration • Start with the joint probability distribution: For any proposition φ, sum the atomic events where it is true: P(φ) = Σω:ω⊨φ P(ω) • P(cavity ∨ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28 (P(cavity ∨ toothache) = P(cavity) + P(toothache) − P(cavity ∧ toothache))

  11. Inference by enumeration • Start with the joint probability distribution: • Can also compute conditional probabilities (product rule): P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache) = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.08 / 0.2 = 0.4 P(cavity | toothache) = (0.108 + 0.012) / 0.2 = 0.6

  12. Normalization • Denominator P(z) (or P(toothache) in the example before) can be viewed as a normalization constant α: P(Cavity | toothache) = α P(Cavity, toothache) = α [P(Cavity, toothache, catch) + P(Cavity, toothache, ¬catch)] = α [<0.108, 0.016> + <0.012, 0.064>] = α <0.12, 0.08> = <0.6, 0.4> • General idea: compute the distribution on the query variable by fixing the evidence variables (Toothache) and summing over the hidden variables (Catch)

  13. Inference by enumeration, contd. Typically, we are interested in the posterior joint distribution of the query variables Y given specific values e for the evidence variables E (X are all variables of the modeled world). Let the hidden variables be H = X − Y − E; then the required summation of joint entries is done by summing out the hidden variables: P(Y | E = e) = α P(Y, E = e) = α Σh P(Y, E = e, H = h) • The terms in the summation are joint entries because Y, E and H together exhaust the set of random variables (X) • Obvious problems: • Worst-case time complexity O(d^n), where d is the largest arity and n denotes the number of random variables • Space complexity O(d^n) to store the joint distribution • How to find the numbers for the O(d^n) entries?
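
A compact Python sketch of this enumeration procedure for the Toothache/Catch/Cavity example (added for illustration, not part of the slides). The six joint entries used on the previous slides are taken from there; the remaining two entries (¬toothache with ¬cavity) are assumed to be the standard AIMA values 0.144 and 0.576 so that the table sums to 1.

```python
# Full joint P(Toothache, Catch, Cavity); keys are (toothache, catch, cavity).
joint = {
    (True,  True,  True):  0.108, (True,  False, True):  0.012,
    (False, True,  True):  0.072, (False, False, True):  0.008,
    (True,  True,  False): 0.016, (True,  False, False): 0.064,
    (False, True,  False): 0.144, (False, False, False): 0.576,  # assumed standard AIMA values
}
VARS = ['Toothache', 'Catch', 'Cavity']

def enumerate_ask(query_var, evidence):
    """P(query_var | evidence): fix evidence, sum out hidden variables, normalize."""
    q = VARS.index(query_var)
    dist = {}
    for val in (True, False):
        total = 0.0
        for world, p in joint.items():
            if world[q] == val and all(world[VARS.index(v)] == e for v, e in evidence.items()):
                total += p          # sum over hidden variables consistent with the evidence
        dist[val] = total
    alpha = 1.0 / sum(dist.values())  # normalization constant
    return {val: alpha * p for val, p in dist.items()}

print(enumerate_ask('Cavity', {'Toothache': True}))  # ~{True: 0.6, False: 0.4}
```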

  14. Independence • A and B are independent iff P(A|B) = P(A) or P(B|A) = P(B) or P(A, B) = P(A) P(B) P(Toothache, Catch, Cavity, Weather) = P(Toothache, Catch, Cavity) P(Weather) • 32 entries reduced to 12; • Absolute independence powerful but rare • Dentistry is a large field with hundreds of variables, none of which are independent. What to do?

  15. Conditional independence • P(Toothache, Cavity, Catch) has 2³ − 1 = 7 independent entries • If I have a cavity, the probability that the probe catches in it doesn't depend on whether I have a toothache: (1) P(catch | toothache, cavity) = P(catch | cavity) • The same independence holds if I haven't got a cavity: (2) P(catch | toothache, ¬cavity) = P(catch | ¬cavity) • Catch is conditionally independent of Toothache given Cavity: P(Catch | Toothache, Cavity) = P(Catch | Cavity) • Equivalent statements: P(Toothache | Catch, Cavity) = P(Toothache | Cavity) P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity)

  16. Conditional independence contd. • Write out the full joint distribution using the chain rule: P(Toothache, Catch, Cavity) = P(Toothache | Catch, Cavity) P(Catch, Cavity) = P(Toothache | Catch, Cavity) P(Catch | Cavity) P(Cavity) = P(Toothache | Cavity) P(Catch | Cavity) P(Cavity)   (by conditional independence) i.e., 2 + 2 + 1 = 5 independent numbers • In most cases, the use of conditional independence reduces the size of the representation of the joint distribution from exponential in n to linear in n • Conditional independence is our most basic and robust form of knowledge about uncertain environments
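
A short sketch (illustrative, not from the slides) of what this buys us: only the 2 + 2 + 1 = 5 numbers are stored, and every one of the 8 joint entries is reconstructed from them. The five numbers below are derived from the full joint used in the enumeration sketch above (which itself contained two assumed entries).

```python
# The 2 + 2 + 1 = 5 numbers of the factored representation
# P(Toothache, Catch, Cavity) = P(Toothache | Cavity) P(Catch | Cavity) P(Cavity).
p_cavity = 0.2
p_toothache_given = {True: 0.6, False: 0.1}   # P(toothache | Cavity)
p_catch_given     = {True: 0.9, False: 0.2}   # P(catch | Cavity)

def joint(toothache, catch, cavity):
    """Reconstruct any of the 8 joint entries from the 5 stored numbers."""
    pt = p_toothache_given[cavity] if toothache else 1 - p_toothache_given[cavity]
    pc = p_catch_given[cavity] if catch else 1 - p_catch_given[cavity]
    pv = p_cavity if cavity else 1 - p_cavity
    return pt * pc * pv

print(joint(True, True, True))    # 0.6 * 0.9 * 0.2 = 0.108
print(joint(True, False, False))  # 0.1 * 0.8 * 0.8 = 0.064
```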

  17. Naïve Bayes Model: P(Cause, Effect1, …, Effectn) = P(Cause) ∏i P(Effecti | Cause) • Usually, the assumption that the effects are conditionally independent given the cause is wrong, but it works well in practice
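
A minimal naive-Bayes sketch in Python (illustrative, not from the slides), reusing the dentistry numbers from above: the cause is Cavity, the effects Toothache and Catch are treated as conditionally independent given the cause, and the posterior over the cause is obtained by normalizing P(Cause) ∏i P(ei | Cause).

```python
# Naive Bayes over the dentistry example: Cause = Cavity, Effects = Toothache, Catch.
p_cause = {True: 0.2, False: 0.8}                      # P(Cavity)
p_effect_given_cause = {                               # P(effect = true | Cavity)
    'Toothache': {True: 0.6, False: 0.1},
    'Catch':     {True: 0.9, False: 0.2},
}

def posterior(observed):
    """P(Cavity | observed effects) ∝ P(Cavity) * prod_i P(e_i | Cavity)."""
    scores = {}
    for cavity, prior in p_cause.items():
        score = prior
        for effect, value in observed.items():
            p_true = p_effect_given_cause[effect][cavity]
            score *= p_true if value else 1 - p_true
        scores[cavity] = score
    z = sum(scores.values())
    return {cavity: s / z for cavity, s in scores.items()}

print(posterior({'Toothache': True}))                  # ~{True: 0.6, False: 0.4}
print(posterior({'Toothache': True, 'Catch': True}))   # cavity becomes much more likely
```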

  18. Bayesian networks • A simple, graphical notation for conditional independence assertions and hence for compact specification of full joint distributions • Syntax: • a set of nodes, one per variable • a directed, acyclic graph (link ≈ "directly influences") • a conditional distribution for each node given its parents: P (Xi | Parents (Xi)) • In the simplest case, conditional distribution represented as a conditional probability table (CPT) giving the distribution over Xi for each combination of parent values

  19. Example • Topology of network encodes conditional independence assertions: • Weather is independent of the other variables • Toothache and Catch are conditionally independent given Cavity

  20. Example • I'm at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes it's set off by minor earthquakes. Is there a burglary? • Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls • Network topology reflects "causal" knowledge: • A burglar can set the alarm off • An earthquake can set the alarm off • The alarm can cause Mary to call • The alarm can cause John to call

  21. Example contd.

  22. Compactness of Full Joint Encoding Due to Network Sparseness • A CPT for Boolean Xi with k Boolean parents has 2^k rows for the combinations of parent values • Each row requires one number p for Xi = true (the number for Xi = false is just 1 − p) • If each variable has no more than k parents, the complete network requires n · 2^k numbers • i.e., grows linearly with n, vs. 2^n for the full joint distribution • For the burglary net, 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 − 1 = 31)

  23. Semantics The full joint distribution is defined as the product of the local conditional distributions: P(X1, …, Xn) = ∏i=1..n P(Xi | Parents(Xi)) e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e) = 0.90 × 0.70 × 0.001 × 0.999 × 0.998 ≈ 0.00063
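
To make the factorized semantics concrete, here is a short Python sketch (not part of the slides). Only the five numbers appearing in the product above are confirmed by the slide; the remaining CPT entries are the standard AIMA burglary-network values and should be read as assumptions.

```python
# CPTs of the burglary network (standard AIMA values; treated here as assumptions).
P_B = 0.001                                   # P(Burglary)
P_E = 0.002                                   # P(Earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm | B, E)
P_J = {True: 0.90, False: 0.05}               # P(JohnCalls | Alarm)
P_M = {True: 0.70, False: 0.01}               # P(MaryCalls | Alarm)

def joint(b, e, a, j, m):
    """P(B, E, A, J, M) = product of the local conditional probabilities."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return pb * pe * pa * pj * pm

# P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = 0.90 * 0.70 * 0.001 * 0.999 * 0.998 ≈ 0.00063
print(joint(False, False, True, True, True))
```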

  24. Noisy-OR Representation Two parents A and B individually cause C with probabilities 0.8 and 0.1, respectively (figure: network A → C ← B with edge labels 0.8 and 0.1). Meaning as a CPT for C:
      A  B  | P(c | A, B)
      f  f  | 0.0
      f  t  | 0.1
      t  f  | 0.8
      t  t  | 0.1 + 0.8 − 0.1·0.8
  Substantial reduction of the number of probabilities to be stored
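
A small sketch (illustrative, not from the slides) of the noisy-OR combination rule: store one probability per parent and derive each CPT row as P(c | parents) = 1 − ∏ over the active parents of (1 − pi), which for both parents active yields 0.1 + 0.8 − 0.1·0.8 = 0.82.

```python
from itertools import product

# Noisy-OR: one probability per parent instead of a full CPT.
parent_probs = {'A': 0.8, 'B': 0.1}   # P(c | only this parent is true)

def noisy_or(assignment):
    """P(c | parent assignment) = 1 - prod over active parents of (1 - p_i)."""
    q = 1.0
    for parent, active in assignment.items():
        if active:
            q *= 1 - parent_probs[parent]
    return 1 - q

# Derive the full CPT from the two stored numbers.
for a, b in product([False, True], repeat=2):
    print(a, b, noisy_or({'A': a, 'B': b}))
# f f 0.0, f t 0.1, t f 0.8, t t 0.82
```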

  25. What must be true to allow for a certain (desired) conclusion?

  26. (Calculation over the sum-product evaluation tree with path length n (= number of random variables) and degree d (= maximal number of domain elements))

  27. Evaluation Tree Can do better with Variable Elimination (VE)

  28. Basic Objects of VE P(M | A) is recorded as the factor fM(A) • Track objects called factors (right-to-left, in the tree: bottom up) • Initial factors are the local CPTs • During elimination, create new factors • Form of Dynamic Programming

  29. Basic Operations: Pointwise Product • Pointwise product of factors f1 and f2 • for example: f1(A,B) * f2(B,C) = f(A,B,C) • in general: f1(X1,...,Xj,Y1,…,Yk) * f2(Y1,…,Yk,Z1,…,Zl) = f(X1,...,Xj,Y1,…,Yk,Z1,…,Zl) • has 2^(j+k+l) entries (if all variables are binary)

  30. Join by pointwise product

  31. Basic Operations: Summing out • Summing out a variable from a product of factors • Move any constant factors outside the summation • Add up submatrices in the pointwise product of the remaining factors: Σx f1 * … * fk = f1 * … * fi * (Σx fi+1 * … * fk) = f1 * … * fi * fX, assuming f1, …, fi do not depend on X
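
The two factor operations can be sketched in a few lines of Python (illustrative, not from the slides): a factor is stored as a pair of a variable tuple and a table from value tuples to numbers, the pointwise product multiplies entries that agree on the shared variables, and summing out a variable adds up the entries that differ only in that variable.

```python
from itertools import product

# A factor is (variables, table) where table maps value tuples to numbers.
def make_factor(variables, table):
    return (tuple(variables), dict(table))

def pointwise_product(f1, f2):
    """f1(X,Y) * f2(Y,Z) = f(X,Y,Z): multiply entries that agree on shared variables."""
    vars1, t1 = f1
    vars2, t2 = f2
    out_vars = vars1 + tuple(v for v in vars2 if v not in vars1)
    out = {}
    for values in product([False, True], repeat=len(out_vars)):
        assign = dict(zip(out_vars, values))
        out[values] = t1[tuple(assign[v] for v in vars1)] * t2[tuple(assign[v] for v in vars2)]
    return (out_vars, out)

def sum_out(var, f):
    """Sum out `var`: add up the entries that differ only in the value of `var`."""
    vars_, t = f
    keep = tuple(v for v in vars_ if v != var)
    out = {}
    for values, p in t.items():
        key = tuple(v for v, name in zip(values, vars_) if name != var)
        out[key] = out.get(key, 0.0) + p
    return (keep, out)

# Tiny example with two Boolean factors f1(A,B) and f2(B,C) (illustrative numbers).
f1 = make_factor(['A', 'B'], {(True, True): 0.3, (True, False): 0.7,
                              (False, True): 0.9, (False, False): 0.1})
f2 = make_factor(['B', 'C'], {(True, True): 0.2, (True, False): 0.8,
                              (False, True): 0.6, (False, False): 0.4})
f12 = pointwise_product(f1, f2)     # f(A, B, C) with 2^3 entries
print(sum_out('B', f12))            # factor over (A, C)
```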

  32. Summing out: summing out a

  33. VE result for Burglary Example

  34. Further Optimization by Finding Irrelevant Variables {Alarm, Earthquake, Burglary} Irrelevant variables can be identified by a graph-theoretical criterion (m-separation) on the associated moral graph. Many other inference algorithms and optimizations are known. Discussed in detail in the course on Bayesian Knowledge Representation

  35. Acknowledgements Slides from the AIMA book provided by Cristina Conati, UBC

  36. Data Mining Bayesian Networks • Full Bayesian Learning • MAP learning • Maximum Likelihood Learning • Learning Bayesian Networks • Fully observable • With hidden (unobservable) variables

  37. Full Bayesian Learning • In the learning methods we have seen so far, the idea was always to find the best model that could explain some observations • In contrast, full Bayesian learning sees learning as Bayesian updating of a probability distribution over the hypothesis space, given data • H is the hypothesis variable • Possible hypotheses (values of H): h1, …, hn • P(H) = prior probability distribution over the hypothesis space • The jth observation dj gives the outcome of the random variable Dj • Training data d = d1, …, dk

  38. Non-Standard-Datenbanken Bayesian Networks, Inference, and Learning Prof. Dr. Ralf Möller Universität zu Lübeck Institut für Informationssysteme

  39. Example contd.

  40. Example • Suppose we have 5 types of candy bags • 10% are 100% cherry candies (h100) • 20% are 75% cherry + 25% lime candies (h75) • 40% are 50% cherry + 50% lime candies (h50) • 20% are 25% cherry + 75% lime candies (h25) • 10% are 100% lime candies (h0) • Then we observe candies drawn from some bag • Let's call 𝜃 the parameter that defines the fraction of cherry candy in a bag, and h𝜃 the corresponding hypothesis • Which of the five kinds of bag has generated my 10 observations? P(h𝜃 | d) • What flavour will the next candy be? Prediction P(X | d)

  41. Full Bayesian Learning • Given the data so far, each hypothesis hi has a posterior probability: • P(hi | d) = α P(d | hi) P(hi) (Bayes' theorem) • where P(d | hi) is called the likelihood of the data under each hypothesis • Predictions over a new entity X are a weighted average over the predictions of each hypothesis: • P(X | d) = ∑i P(X, hi | d) = ∑i P(X | hi, d) P(hi | d) = ∑i P(X | hi) P(hi | d) ∝ ∑i P(X | hi) P(d | hi) P(hi) (the data does not add anything to a prediction given a hypothesis) • The weights are given by the data likelihood and the prior of each h + No need to pick one best-guess hypothesis! − Need to iterate over all hypotheses

  42. Example • If we re-wrap each candy and return it to the bag, our 10 observations are independent and identically distributed (i.i.d.), so • P(d | h𝜃) = ∏j P(dj | h𝜃) for j = 1, …, 10 • For a given h𝜃, the value of P(dj | h𝜃) is • P(dj = cherry | h𝜃) = 𝜃; P(dj = lime | h𝜃) = (1 − 𝜃) • And given N observations, of which c are cherry and l = N − c lime: P(d | h𝜃) = 𝜃^c (1 − 𝜃)^l • Binomial distribution: probability of the number of successes in a sequence of N independent trials with binary outcome, each of which yields success with probability 𝜃 • For instance, after observing 3 lime candies in a row: • P([lime, lime, lime] | h50) = 0.5³ = 0.125, because the probability of seeing lime for each observation is 0.5 under this hypothesis

  43. All-limes: Posterior Probability of H P(hi | d) = α P(d | hi) P(hi) • Initially, the hypotheses with higher priors dominate (h50 with prior = 0.4) • As data comes in, the finally best hypothesis (h0) starts dominating, as the probability of seeing this data given the other hypotheses gets increasingly smaller • After seeing three lime candies in a row, the probability that the bag is the all-lime one starts taking off

  44. Prediction Probability ∑i P(next candy is lime | hi) P(hi | d) • The probability that the next candy is lime increases with the probability that the bag is an all-lime one
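
The posterior and prediction behaviour described on these two slides can be reproduced with a short Python sketch (added for illustration, not part of the slides), using the five bag types and priors from slide 40. After three lime candies, the Bayesian prediction that the next candy is lime comes out at roughly 0.8, the value referred to on the MAP slide further below.

```python
# Five hypotheses: fraction of cherry candies theta, with prior P(h_theta) from slide 40.
hypotheses = {1.00: 0.1, 0.75: 0.2, 0.50: 0.4, 0.25: 0.2, 0.00: 0.1}

def posterior(observations):
    """P(h | d) = alpha * P(d | h) * P(h), with i.i.d. likelihood P(d | h) = prod_j P(d_j | h)."""
    scores = {}
    for theta, prior in hypotheses.items():
        likelihood = 1.0
        for candy in observations:
            likelihood *= theta if candy == 'cherry' else (1 - theta)
        scores[theta] = prior * likelihood
    z = sum(scores.values())
    return {theta: s / z for theta, s in scores.items()}

def predict_lime(observations):
    """P(next = lime | d) = sum_i P(lime | h_i) * P(h_i | d)."""
    return sum((1 - theta) * p for theta, p in posterior(observations).items())

data = ['lime', 'lime', 'lime']
print(posterior(data))      # h_0 (all-lime) already has the largest posterior
print(predict_lime(data))   # ~0.80
```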

  45. Overview • Full Bayesian Learning • MAP learning • Maximum Likelihood Learning • Learning Bayesian Networks • Fully observable • With hidden (unobservable) variables

  46. MAP approximation • Full Bayesian learning seems like a very safe bet, but unfortunately it does not work well in practice • Summing over the hypothesis space is often intractable (e.g., there are 2^(2^6) = 18,446,744,073,709,551,616 Boolean functions of 6 attributes) • Very common approximation: Maximum a posteriori (MAP) learning: • Instead of doing prediction by considering all possible hypotheses, as in • P(X | d) = ∑i P(X | hi) P(hi | d) • Make predictions based on the hMAP that maximises P(hi | d) • I.e., maximize P(d | hi) P(hi) • P(X | d) ≈ P(X | hMAP)

  47. MAP approximation • MAP is a good approximation when P(X | d) ≈ P(X | hMAP) • In our example, hMAP is the all-lime bag after only 3 candies, predicting that the next candy will be lime with p = 1 • The Bayesian learner gave a prediction of 0.8, which is safer after seeing only 3 candies
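
A minimal sketch of the MAP approximation on the same candy data (illustrative, self-contained): pick the single hypothesis hMAP that maximizes P(d | h) P(h) and predict with it alone.

```python
# MAP approximation on the candy example: pick h_MAP = argmax_h P(d | h) P(h).
hypotheses = {1.00: 0.1, 0.75: 0.2, 0.50: 0.4, 0.25: 0.2, 0.00: 0.1}   # theta -> prior
data = ['lime', 'lime', 'lime']

def score(theta):
    """Unnormalized posterior P(d | h_theta) * P(h_theta)."""
    likelihood = 1.0
    for candy in data:
        likelihood *= theta if candy == 'cherry' else (1 - theta)
    return likelihood * hypotheses[theta]

h_map = max(hypotheses, key=score)
print(h_map)      # 0.0: the all-lime bag is h_MAP after three lime candies
print(1 - h_map)  # MAP prediction P(next = lime | h_MAP) = 1.0
# The full Bayesian prediction after the same data was ~0.8 (previous sketch).
```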

  48. Bias • As more data arrive, MAP and Bayesian prediction become closer, as MAP's competing hypotheses become less likely • It is often easier to find hMAP (an optimization problem) than to deal with a large summation problem • P(H) plays an important role in both MAP and full Bayesian learning (it defines the learning bias) • It is used to define a trade-off between model complexity and the ability to fit the data • More complex models can explain the data better => higher P(d | hi), but danger of overfitting • But they are less likely a priori because there are more of them than simpler models => lower P(hi) • I.e., a common learning bias is to penalize complexity

  49. Overview • Full Bayesian Learning • MAP learning • Maximum Likelihood Learning • Learning Bayesian Networks • Fully observable • With hidden (unobservable) variables

  50. Maximum Likelihood (ML) Learning • Further simplification over full Bayesian and MAP learning • Assume uniform priors over the space of hypotheses • MAP learning (maximize P(d | hi) P(hi)) then reduces to maximizing P(d | hi) • When is ML appropriate? Howard, R.: Decision analysis: Perspectives on inference, decision, and experimentation. Proceedings of the IEEE 58(5), 632-643, 1970
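
For completeness, a tiny sketch (illustrative): with uniform priors the MAP score degenerates to the likelihood alone, so ML simply picks the hypothesis maximizing P(d | h).

```python
# Maximum likelihood on the candy example: uniform priors, so maximize P(d | h_theta) only.
thetas = [1.00, 0.75, 0.50, 0.25, 0.00]
data = ['lime', 'lime', 'lime']

def likelihood(theta):
    p = 1.0
    for candy in data:
        p *= theta if candy == 'cherry' else (1 - theta)
    return p

h_ml = max(thetas, key=likelihood)
print(h_ml)   # 0.0: the all-lime hypothesis maximizes the likelihood of three limes
```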
