
Minimax Pathology


Presentation Transcript


  1. Minimax Pathology Mitja Luštrek¹, Ivan Bratko² and Matjaž Gams¹ (¹Jožef Stefan Institute, Department of Intelligent Systems; ²University of Ljubljana, Faculty of Computer and Information Science)

  2. Plan of the talk • What is the minimax pathology • Past work on the pathology • A real-valued minimax model • Why is minimax not pathological • Why is minimax beneficial

  3. What is the minimax pathology • Past work on the pathology • A real-valued minimax model • Why is minimax not pathological • Why is minimax beneficial

  4. What is the minimax pathology • Conventional wisdom: • the deeper one searches a game tree, the better one plays; • no shortage of practical confirmation. • Theoretical analyses: • minimaxing amplifies the error of the heuristic evaluation function; • therefore the deeper one searches, the worse one plays; • Pathology!

  5.–8. The pathology illustrated (figure sequence): the game tree from the current position with final (true) values; static heuristic values (with error); after minimaxing, backed-up heuristic values (should be more trustworthy, but have larger error instead!); static heuristic values (with smaller error).

  9. What is the minimax pathology • Past work on the pathology • A real-valued minimax model • Why is minimax not pathological • Why is minimax beneficial

  10. The discovery • First discovered by Nau [1979]. • Independently discovered a year later by Beal [1980]. • Beal’s minimax model: • uniform branching factor; • position values are losses or wins; • the proportion of losses for the side to move is constant; • position values within a level are independent of each other; • the error is the probability of mistaking a loss for a win or vice versa and is independent of the level of a position. • None of the assumptions looks terribly unrealistic, yet the pathology is there.
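
Since the model is fully specified by these assumptions, a small simulation makes the claimed error amplification concrete. The sketch below is our own illustration, not Beal's analysis: the helper names and the fixed-point proportion c_b (the solution of c = (1 − c)^b, which keeps the proportion of losses for the side to move the same at every level under the negamax back-up rule) are our additions. It draws leaf values with a constant proportion of losses, flips each leaf evaluation with a fixed probability, backs both sets of values up, and reports how often the root value is wrong at increasing depths.

```python
import random

LOSS, WIN = 0, 1


def loss_proportion(b):
    """Solve c = (1 - c)**b by bisection ((1 - c)**b - c is decreasing in c)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if (1.0 - mid) ** b > mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0


def back_up(level, b):
    """One back-up step: a node is a WIN for its side to move iff at least
    one child is a LOSS for the side to move at that child."""
    return [WIN if LOSS in level[i:i + b] else LOSS
            for i in range(0, len(level), b)]


def root(level, b):
    while len(level) > 1:
        level = back_up(level, b)
    return level[0]


def root_error(b, depth, static_error, trials=2000):
    """Frequency of mistaking a loss for a win (or vice versa) at the root
    when each leaf evaluation is wrong with probability static_error."""
    cb = loss_proportion(b)
    mistakes = 0
    for _ in range(trials):
        true = [LOSS if random.random() < cb else WIN for _ in range(b ** depth)]
        noisy = [1 - v if random.random() < static_error else v for v in true]
        mistakes += root(noisy, b) != root(true, b)
    return mistakes / trials


if __name__ == "__main__":
    # On Beal's analysis, this error grows with the depth of search (the pathology).
    for depth in range(1, 9):
        print(depth, round(root_error(b=2, depth=depth, static_error=0.1), 3))
```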

  11. Attempts at an explanation • Researchers tried to find a flaw in Beal’s model by attacking its assumptions. • Uniform branching factor: • a geometrically distributed branching factor prevents the pathology [Michon, 1983]; • in chess endgames, an asymmetrical branching factor causes the pathology [Sadikov, 2005]. • Node values are losses or wins: • multiple values do not help [Bratko & Gams, 1982; Pearl, 1983]; • multiple values were used in a game that is nevertheless pathological [Nau, 1982, 1983]; • multiple/real values were used to construct a realistic model that is not pathological [Scheucher & Kaindl, 1998; Luštrek, 2004].

  12. Attempts at an explanation • The proportion of losses for the side to move is constant: • in models where it is applicable, it was agreed to be necessary [Beal, 1982; Bratko & Gams, 1982; Nau, 1982, 1983]. • Node values within a level are independent of each other: • nearby positions are similar and thus have similar values; • most researchers agreed that this is the answer or at least a part of it [Beal, 1982; Bratko & Gams, 1982; Pearl, 1983; Nau, 1982, 1983; Schrüfer, 1986; Scheucher & Kaindl, 1998; Luštrek, 2004].

  13. Attempts at an explanation • The error is independent of the level of a position: • varying error cannot account for the absence of the pathology [Pearl, 1983]; • varying error was used in a game and it did not help [Nau, 1982, 1983]; • varying error is a part of the answer (with the other part being node-value dependence) [Scheucher & Kaindl, 1998]. • Despite some disagreement, node-value dependence seems to be the most widely supported explanation. • But is it really necessary? Is there no simpler, more fundamental explanation? We believe there is!

  14. What is the minimax pathology • Past work on the pathology • A real-valued minimax model • Why is minimax not pathological • Why is minimax beneficial

  15. Why multiple/real values? • Necessary in games where the final outcome is multivalued (Othello, tarok). • Used by humans and game-playing programs. • Seem unnecessary in games where the outcome is a loss, a win or perhaps a draw (chess, checkers). • But: • in a losing position against a fallible and unknown opponent, the outcome is uncertain; • in a winning position, a perfect two-valued evaluation function will not lose, but it may never win, either. • Multiple values are required to model uncertainty and to maintain a direction of play towards an eventual win.

  16. A real-valued minimax model • Aims to be a real-valued version of Beal’s model: • uniform branching factor; • position values are real numbers; • if the real values are converted to losses and wins, the proportion of losses for the side to move is constant; • position values within a level are independent of each other; • the error is normally distributed noise and is independent of the level of a position. • The crucial difference is assumption 5.

  17. Assumption 5 (figure): two-value error illustrated on a loss/win scale (−/+); real-value error illustrated on a real scale (example values 0.31 and 0.74).

  18. Assumption 5 • Beal’s assumption 5: static P(loss ↔ win) is constant with the depth of search. • Our assumption 5: the magnitude of static real-value noise is constant with the depth of search. • [Two plots: P(loss ↔ win) vs. depth; real-value noise vs. depth.] • Note: static = applied at the lowest level of search.

  19.–29. Building of a game tree (figure sequence): true values are distributed uniformly in [0, 1] at the leaves and backed up level by level; a depth of search is chosen; heuristic values = true values + normally distributed noise at that depth; the heuristic values are then backed up to the root.

  30. What we do with our model • Monte Carlo experiments: • generate 10,000 sets of true values; • generate 10 sets of heuristic values per set of true values per depth of search. • Measure the error at the root: • real-value error = the average difference between the true value and the heuristic value; • two-value error = the frequency of mistaking a loss for a win or vice versa. • Compare the error at the root when searching to different depths.
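
To make the procedure of slides 19–30 concrete, here is a minimal Monte Carlo sketch under our own reading of the model: standard minimax back-up from the root player's perspective, a complete binary tree of fixed total depth, Gaussian noise added at the search horizon, far fewer samples than the 10,000 sets used on the slides, and a plain THRESHOLD parameter where the slides use c_b. All names and constants are illustrative choices, not the authors' code.

```python
import random
import statistics

B = 2             # branching factor
TOTAL_DEPTH = 8   # depth of the full game tree
SIGMA = 0.1       # standard deviation of the static real-value noise
THRESHOLD = 0.5   # loss/win threshold (the slides take it to be c_b)


def back_up_once(level, depth_above):
    """Minimax one level: max at even depths (root player to move), min at odd."""
    op = max if depth_above % 2 == 0 else min
    return [op(level[i:i + B]) for i in range(0, len(level), B)]


def true_levels(leaves):
    """True values at every level; index 0 is the root, the last entry the leaves."""
    levels = [leaves]
    for depth in range(TOTAL_DEPTH - 1, -1, -1):
        levels.insert(0, back_up_once(levels[0], depth))
    return levels


def backed_up_root(values, depth):
    """Minimax values sitting at the given depth all the way up to the root."""
    for d in range(depth - 1, -1, -1):
        values = back_up_once(values, d)
    return values[0]


def one_trial(levels, search_depth):
    """Corrupt the true values at the search horizon and measure both errors."""
    noisy = [v + random.gauss(0.0, SIGMA) for v in levels[search_depth]]
    heuristic_root = backed_up_root(noisy, search_depth)
    true_root = levels[0][0]
    real_error = abs(heuristic_root - true_root)
    two_error = int((heuristic_root > THRESHOLD) != (true_root > THRESHOLD))
    return real_error, two_error


if __name__ == "__main__":
    real_errors = {d: [] for d in range(1, TOTAL_DEPTH + 1)}
    two_errors = {d: [] for d in range(1, TOTAL_DEPTH + 1)}
    for _ in range(200):                      # sets of true values (slides: 10,000)
        levels = true_levels([random.random() for _ in range(B ** TOTAL_DEPTH)])
        for d in range(1, TOTAL_DEPTH + 1):   # depth of search
            for _ in range(10):               # heuristic values per set and depth
                r, t = one_trial(levels, d)
                real_errors[d].append(r)
                two_errors[d].append(t)
    for d in range(1, TOTAL_DEPTH + 1):
        print(d, round(statistics.mean(real_errors[d]), 3),
              round(statistics.mean(two_errors[d]), 3))
```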

  31. Conversion of real values to losses and wins • To measure two-value error, real values must be converted to losses and wins. • A value above a threshold means a win, below the threshold a loss. • At the leaves: • the proportion of losses for the side to move = c_b (because it must be the same at all levels); • real values are distributed uniformly in [0, 1]; • therefore threshold = c_b. • At higher levels: • minimaxing on real values is equivalent to minimaxing on two values; • therefore also threshold = c_b.
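
The equivalence used in the last step holds because comparing with a threshold is monotone, so it commutes with max and min: the maximum of some values exceeds the threshold exactly when at least one of them does, and likewise for the minimum. A tiny self-contained check (all names and the example threshold are ours):

```python
import random

# Thresholding commutes with minimax: for the monotone test v > t,
# (max of values) > t  ==  max of the individual tests, and likewise for min.
# Checked here on random two-level trees; the threshold is an arbitrary choice.

def minimax_then_threshold(children, t):
    return max(min(grandchildren) for grandchildren in children) > t

def threshold_then_minimax(children, t):
    two_valued = [[v > t for v in grandchildren] for grandchildren in children]
    return max(min(g) for g in two_valued)    # max/min on booleans act as or/and

if __name__ == "__main__":
    t = 0.4
    for _ in range(10000):
        tree = [[random.random() for _ in range(2)] for _ in range(2)]
        assert minimax_then_threshold(tree, t) == threshold_then_minimax(tree, t)
    print("minimaxing then thresholding matched thresholding then minimaxing")
```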

  32.–41. Conversion of real values to losses and wins (figure sequence): the real values are minimaxed and then the threshold is applied; alternatively, the threshold is applied first and the resulting two values are minimaxed; both routes give the same losses and wins.

  42. What is the minimax pathology • Past work on the pathology • A real-valued minimax model • Why is minimax not pathological • Why is minimax beneficial

  43. Error at the root / constant static real-value error • Plotted: real-value and two-value error at the root. • Static real-value error: normally distributed noise with standard deviation 0.1.

  44. Static two-value error / constant static real-value error • Plotted: static two-value error. • Static real-value error: normally distributed noise with standard deviation 0.1.

  45. Static real-value error / constant static two-value error • Plotted: static real-value error. • Static two-value error: 0.1.

  46. Error at the root / constant static two-value error • Plotted: two-value error at the root in our real-value model and in Beal’s model. • Two-value error at the lowest level of search: 0.1. • After a small tweak of Beal’s model, we get a perfect match.

  47. Conclusions from the graphs • Static real-value error is constant: • static two-value error decreases with the depth of search; • no pathology. • Static two-value error is constant: • static real-value error increases with the depth of search; • pathology. • Which static error should be constant?

  48. Should real- or two-value static error be constant? • Already explained why real values are necessary. • Real-value error most naturally represents the fallibility of the heuristic evaluation function. • Game-playing programs do not use two-valued evaluation functions, but if they did: • they would more often make a mistake in uncertain positions close to the threshold; • they would rarely make a mistake in certain positions far from the threshold.

  49. Should real- or two-value static error be constant?

  50. Two-value error larger at higher levels • Some simplifications: • branching factor = 2; • node values in [0, 1]; • consider only one type of error: wins mistaken for losses; • consider two levels at a time to avoid even/odd level differences. • Notation: X ... true real value of a node; F(x) = P(X < x) ... distribution function of the true real value; e ... real-value error; X − e ... heuristic real value; t ... threshold. • Two-value error: P(X > t ∧ X − e < t) = P(t < X < t + e) = F(t + e) − F(t)
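
As a quick sanity check of the last identity: at the leaf level, where the slides take true values uniform on [0, 1], F(x) = x, so F(t + e) − F(t) reduces to e whenever t + e ≤ 1. A short Monte Carlo confirmation with arbitrary illustrative choices of t and e:

```python
import random

# Check P(t < X < t + e) = F(t + e) - F(t) for X uniform on [0, 1], where
# F(x) = x, so the probability should come out close to e.  The values of
# t and e are arbitrary illustrative choices.

t, e = 0.25, 0.125
samples = 1_000_000
hits = sum(1 for _ in range(samples) if t < random.random() < t + e)
print(hits / samples, "vs F(t + e) - F(t) =", (t + e) - t)
```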
