
Computers Playing Games


Presentation Transcript


  1. Computers Playing Games Arif Zaman CS 101

  2. Acknowledgements • Portions of this are taken from MIT’s OpenCourseWare http://ocw.mit.edu/ • Some items are adapted from Chapter 5 on Games by some professor, who adapted it from notes by Charles R. Dyer, University of Wisconsin-Madison

  3. Why Play? • Humans play for enjoyment, but computers…? • Games of strategy and skill require “intelligence”; we may learn about thinking by learning how to teach computers. • Hard but well-defined problems, unlike other AI problems such as speech, ethics, etc. • Competition, knowledge representation, … • To provide entertainment, competition and training to humans (who play or program).

  4. Game Theory • Von Neumann and Morgenstern analyzed two-person zero-sum games, where each player decides simultaneously and is then paid according to a payoff matrix. • Economic games can be multi-person and multi-stage.
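
A small invented illustration of such a payoff matrix, sketched in Python (the numbers are made up for the example; the row player's winnings are the column player's losses):

```python
# payoff[i][j] is what the row player wins (and the column player loses)
# when row plays strategy i and column plays strategy j.
payoff = [[3, -1],
          [0,  2]]

# Row player's safe value: the best of the worst cases (maximin).
maximin = max(min(row) for row in payoff)
# Column player's safe value: the least it can be forced to pay (minimax).
minimax = min(max(payoff[i][j] for i in range(2)) for j in range(2))
print(maximin, minimax)   # 0 2 -- the two differ, so mixed strategies matter
```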

  5. Board Games • Two person • Alternating moves • Zero sum • Deterministic (not Backgammon or Ludo) • Perfect information (not Bridge or Hearts) • Examples: Tic-Tac-Toe, Checkers, Chess, Go, Reversi (Othello)

  6. History • 1949 Claude Shannon paper on Chess • 1951 Alan Turing simulation on Checkers • 1955 Chess program using α-β • 1966 MacHack • 1980 Belle (hardware assist) • 1997 Deep Blue (serious hardware)

  7. Top-down Program • Repeat Until Done • DrawBoard • GetPlayerMove • CheckLegalMove ‘also check if game over • MakePlayerMove • DrawBoard • FindLegalMoves ‘also check if game over • EvaluateLegalMoves • MakeBestMove
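
A minimal Python sketch of this loop; every helper named below (draw_board, get_player_move, is_legal, apply_move, legal_moves, game_over, evaluate) is a hypothetical placeholder that a real program would have to supply:

```python
# Hedged sketch of the top-down program; all helpers are hypothetical.
def play(board):
    while True:
        draw_board(board)
        move = get_player_move(board)
        if not is_legal(board, move):             # CheckLegalMove
            continue                              # ask the player again
        board = apply_move(board, move)           # MakePlayerMove
        if game_over(board):                      # 'also check if game over
            break
        draw_board(board)
        moves = legal_moves(board)                # FindLegalMoves
        if not moves or game_over(board):         # 'also check if game over
            break
        # EvaluateLegalMoves + MakeBestMove: 1-ply greedy choice, where
        # evaluate() scores positions from the computer's point of view.
        best = max(moves, key=lambda m: evaluate(apply_move(board, m)))
        board = apply_move(board, best)
    draw_board(board)
```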

  8. Static Position Evaluator • Given a position, come up with a “value”. • Values range over 0…1, 0…infinity, or –infinity…infinity. • 0…1 can represent the “probability” that I win (0 = sure loss, 1 = sure win). • In chess we can count: my pieces – enemy pieces, or my total piece value – enemy total piece value. • Add points for mobility. • Add points for center control. • Negative points for an exposed king, etc. • This is where human experts excel.
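
A hedged sketch of the simplest such evaluator, a pure material count; the board encoding (uppercase letters for my pieces, lowercase for the enemy's) is an assumption made for the example, and the mobility, center-control and king-safety terms are left out:

```python
# Conventional chess piece values; the king is priceless, so it counts as 0 here.
PIECE_VALUE = {'p': 1, 'n': 3, 'b': 3, 'r': 5, 'q': 9, 'k': 0}

def material_eval(position):
    """Return (my total piece value) - (enemy total piece value)."""
    score = 0
    for piece in position.values():
        value = PIECE_VALUE[piece.lower()]
        score += value if piece.isupper() else -value
    return score

# Example: I have a rook and a pawn, the enemy has a knight.
print(material_eval({'a1': 'R', 'b2': 'P', 'g8': 'n'}))   # 6 - 3 = 3
```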

  9. Game Tree Mini-Max
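
A hedged minimax sketch on an explicit game tree; the nested-list encoding is just for illustration, since a real program generates moves on the fly and calls the static evaluator at the leaves:

```python
# Interior nodes are lists of children; leaves are static evaluation values.
def minimax(node, maximizing):
    if not isinstance(node, list):       # leaf: use the static value
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# Two-ply example: MAX chooses a branch, MIN replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # MIN yields 3, 2 and 2; MAX picks 3
```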

  10. Crude Evaluator • You can start with a crude static evaluator and a high-ply minimax search. • The Russians believed it was better to use an excellent but slow static evaluator with fewer plies. • The extremes are, of course, a perfect evaluator with a 1-ply search, or a complete game-tree search with a trivial evaluator.
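
The trade-off shows up directly in a depth-limited search: the static evaluator is applied once the ply limit is reached, so a crude evaluator can afford many plies while a rich, slow one cannot. A hedged sketch, again with hypothetical legal_moves/apply_move/evaluate helpers:

```python
def search(board, ply, maximizing):
    moves = legal_moves(board)
    if ply == 0 or not moves:            # frontier: fall back on the evaluator
        return evaluate(board)
    values = [search(apply_move(board, m), ply - 1, not maximizing)
              for m in moves]
    return max(values) if maximizing else min(values)

# "Crude evaluator" strategy: cheap evaluate(), large ply.
# "Excellent evaluator" strategy: rich evaluate(), ply = 1.
```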

  11. Deep Blue • A 32-node supercomputer, each node with eight special-purpose chess processors. • 50–100 billion moves in 3 minutes, with a 13–30 ply search.

  12. Other Tricks • α-β pruning: we do not need to look at every possible move, especially if we already have a good candidate. • Quiescence: apply the static evaluator in peaceful positions; search deeper into fights. • Lookup tables: for opening moves. • Special programs: for endgames.
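
A hedged sketch of α-β pruning on the same explicit-tree encoding as the minimax example: a branch is abandoned as soon as it provably cannot affect the final choice.

```python
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):       # leaf: static evaluation value
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # the opponent would never allow this line
                break                    # prune the remaining children
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float('-inf'), float('inf'), True))   # 3; the 4 and 6 are never examined
```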

  13. Other Games • Go is much harder: the best computer is far worse than the best human, even though the rules are very simple. • Checkers: computers are far better than the best humans. • Tic-Tac-Toe is still a great mystery. • Othello has a very interesting story.

  14. Rules of the Game • Start with the four center squares filled. • Each piece must be placed next to an opposite-colored piece. • All opposite-colored pieces enclosed between the newly placed piece and another piece of the same color change color. • Every move must capture.
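
A hedged sketch of the flipping rule, assuming the board is stored as a dict mapping (row, col) to 'B' or 'W' (an encoding chosen only for this example):

```python
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def flips(board, square, color):
    """Return the opponent squares captured by playing `color` on `square`."""
    opponent = 'W' if color == 'B' else 'B'
    captured = []
    for dr, dc in DIRECTIONS:
        run, (r, c) = [], (square[0] + dr, square[1] + dc)
        while board.get((r, c)) == opponent:      # walk over opponent pieces
            run.append((r, c))
            r, c = r + dr, c + dc
        if run and board.get((r, c)) == color:    # bracketed by a friendly piece
            captured.extend(run)
    return captured

# Tiny made-up example: Black playing (2, 3) brackets the White piece at (3, 3).
board = {(3, 3): 'W', (4, 3): 'B'}
print(flips(board, (2, 3), 'B'))   # [(3, 3)] -- a legal (capturing) move
```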

  15. Reversi (Othello) • D. E. Moriarty and R. Miikkulainen, “Discovering Complex Othello Strategies Through Evolutionary Neural Networks”, created a strong player without any initial knowledge by breeding a program! • Evolutionary approach: start with a “population” of 100 “random” programs; have a competition and kill the 90 losers; breed the 10 winners by “mixing” their “genes” with a bit of “mutation”; repeat for many generations (1000s).
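
A hedged, generic sketch of that breed-and-cull loop. The “genome” here is just a list of numbers and fitness() is a stand-in for “wins in an Othello tournament”; the paper's actual neural-network encoding and selection scheme differ in detail:

```python
import random

GENES, POP, SURVIVORS, GENERATIONS = 32, 100, 10, 1000

def fitness(genome):                      # placeholder for tournament results
    return -sum((g - 0.5) ** 2 for g in genome)

def crossover(a, b):                      # "mix the genes" of two winners
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.05):            # a bit of "mutation"
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

population = [[random.random() for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    winners = population[:SURVIVORS]      # kill the 90 losers
    population = winners + [mutate(crossover(*random.sample(winners, 2)))
                            for _ in range(POP - SURVIVORS)]

population.sort(key=fitness, reverse=True)
print(fitness(population[0]))             # fitness of the best evolved genome
```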

  16. Positional Strategy • Programs quickly learned the basic positional strategy: take corners, avoid the neighbors of corners, take the neighbors of neighbors of corners. • Mobility strategy: keep a low piece count and restrict the opponent’s moves. Discovered only once, in Japan, and then everyone learned it from there.

  17. Mobility vs. Positional • The mobility strategy (learned by the ENN) looks like it is losing, but converts the game into a win in the last 3–4 moves!

  18. But… • Specially designed Othello programs still do even better, e.g. Bill or Logistello.
