
SOAR



  1. SOAR

  2. The Basics • SOAR is a theory of human cognition. • It takes a symbolic, production-system approach rather than a connectionist one. • It was created by Allen Newell, one of the founders of modern cognitive science and artificial intelligence. • It reflects Newell’s definition of intelligence: bringing all of the knowledge a system has to bear in the service of its goals.

  3. Introduction • Architecture by itself does nothing: BEHAVIOR = ARCHITECTURE X CONTENT. • A cognitive architecture must help produce cognitive behavior. • Soar is a theory of what cognitive behaviors have in common.

  4. A Professional Baseball Team • Each position on a team could be represented by an agent. • Each agent has the overall goal of winning the game. • Each has sub-goals of being successful at his position. • SOAR could be used to represent this team as a software model.

  5. Agents Make Soar Unique • Agents are software models of real-world objects. • They are objects that react to their environment and produce intelligent behavior. • They operate in dynamic environments with imperfect knowledge, must prioritize their actions and decisions, and work under computational limitations.

  6. Agent Capabilities • Perception: sensing the environment. • Action: responding to the environment. • Planning: mapping out and deciding what to do before doing it. • Learning: incorporating knowledge from the environment. • Cooperation & Coordination: able to cooperate and coordinate with other agents using natural-language capabilities. • Meta-Reasoning: the ability to reason about their own reasoning; they can learn why.
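
  As a rough illustration of these capabilities (not from the original slides), here is one way such an agent interface might be sketched in Python. All class and method names are hypothetical.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Hypothetical interface for a Soar-style agent; names are illustrative only."""

    @abstractmethod
    def perceive(self, environment) -> dict:
        """Perception: sense the environment and return observations."""

    @abstractmethod
    def plan(self, observations: dict) -> list:
        """Planning: map out candidate actions before committing to one."""

    @abstractmethod
    def act(self, decision) -> None:
        """Action: respond to the environment."""

    @abstractmethod
    def learn(self, outcome) -> None:
        """Learning: incorporate results from the environment into memory."""

    @abstractmethod
    def communicate(self, message: str, other: "Agent") -> None:
        """Cooperation & coordination with other agents, e.g. via natural language."""
```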

  7. A Scene from our Game • Agent John Doe is a pitcher throwing the first pitch of his major-league career. • He chooses to throw a curve ball. • The batter, Joe Schmoe, hits the ball, but John is able to catch it after it bounces between home plate and the pitching mound. • John quickly throws the batter out at first base.

  8. Architecture: Goal Driven • The goal context is the heart of the architecture. It is defined by four slots and their values: • The goal: the motivation and direction. • The problem space: the organization of the task. • The state: an internal representation of the situation. • The operator: the means to get from point A to point B.
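
  The four slots of the goal context can be pictured with a small data structure. This is a minimal, hypothetical sketch in Python, not Soar's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GoalContext:
    """The four slots of a goal context (illustrative sketch only)."""
    goal: str                                   # the motivation / direction
    problem_space: str                          # the organization of the task
    state: dict = field(default_factory=dict)   # internal representation of the situation
    operator: Optional[str] = None              # the means to get from point A to point B
```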

  9. John’s Goals and the State • John’s ultimate goal is to win the game along with his team. • His immediate goal is to get the batter out. • His operators are the types of pitches that he can throw. • The state includes the batter, the runners on base, the current count, etc.
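
  Using the hypothetical GoalContext sketch from slide 8, John's immediate context might be instantiated like this (all values are illustrative):

```python
johns_context = GoalContext(
    goal="get the batter out",
    problem_space="pitching",
    state={
        "batter": "Joe Schmoe",
        "runners_on_base": [],
        "count": {"balls": 0, "strikes": 0},
    },
    operator=None,  # not yet chosen: curve ball, fast ball, ...
)
```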

  10. Memory • Productions are memory structures, often represented as if-then statements. • They are constantly being matched against working memory. • They are the lowest level of memory.
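
  A production can be caricatured as a condition-action pair that is tested against working memory. The following is a hypothetical sketch, not Soar's actual production syntax:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Production:
    """An if-then memory structure (illustrative sketch)."""
    name: str
    condition: Callable[[dict], bool]  # the "if" part, tested against working memory
    action: Callable[[dict], dict]     # the "then" part, proposing new working-memory content

    def matches(self, working_memory: dict) -> bool:
        return self.condition(working_memory)
```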

  11. Productions • Several features make them plausible models of human memory: • They are associational in nature. • They are independent of domain, which allows for continuous, incremental learning. • They are dynamic and cognitively impenetrable.

  12. Architecture: LTM • Long-term memory maps from the current goal context in WM to a new goal context; each mapping from context to context is triggered by an association. • LTM holds what is true in general: • If John throws three strikes in a row, then the batter is out. • If John throws four balls in a row, then the batter walks to first.
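
  The two general rules above could be written with the hypothetical Production sketch from slide 10 (the working-memory layout assumed here is the one shown on slide 13):

```python
strikeout_rule = Production(
    name="three-strikes-out",
    condition=lambda wm: wm["state"]["count"]["strikes"] >= 3,
    action=lambda wm: {"batter_status": "out"},
)

walk_rule = Production(
    name="four-balls-walk",
    condition=lambda wm: wm["state"]["count"]["balls"] >= 4,
    action=lambda wm: {"batter_status": "walks to first"},
)

long_term_memory = [strikeout_rule, walk_rule]
```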

  13. Architecture: Working Memory • The contents of WM arise in one of two ways: external perception or associations from LTM. • WM holds the results of perception as values in the current state. • It contains four kinds of objects: goals, problem spaces, states, and operators. • WM is what the model thinks is true in a particular situation: • John’s working memory tells him that he is about to pitch to Joe Schmoe and that he has the goal of getting Joe out. His choices and the current state are all part of working memory. This changes with the batter and the status of the game.
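
  A hypothetical snapshot of John's working memory at the moment he faces Joe, holding the four kinds of objects:

```python
working_memory = {
    "goal": "get the batter out",
    "problem_space": "pitching",
    "state": {
        "batter": "Joe Schmoe",
        "batter_handedness": None,   # filled in by perception / elaboration
        "count": {"balls": 0, "strikes": 0},
        "runners_on_base": [],
    },
    "operator": None,  # candidate pitches are proposed during the decision cycle
}
```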

  14. Decision Cycle • Moving from the general to the specific: the decision cycle is the processing component that generates behavior out of the content that resides in LTM and WM. • It is a recognize-decide-act cycle with two phases: elaboration and decision (both phases are sketched in code after slide 16).

  15. Architecture: The Decision Cycle I, Elaboration Phase • ALL productions that match the current state fire, producing new content in working memory. • For example, if the batter Joe is determined to be left-handed, then new associations are made.

  16. Architecture: The Decision Cycle II, Decision Phase • The decision procedure interprets the accumulated preferences and suggests changes to the context. • The result is either a single change or an impasse. • There is a limit on how much cognition can do at once.
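
  Putting slides 14-16 together, the two phases might be sketched as below. This is a deliberately simplified, hypothetical rendering (real Soar elaboration runs to quiescence and uses a richer preference scheme); it reuses the Production sketch from slide 10 and the working-memory layout from slide 13, and the `prefer_<operator>` flags are an invented convention.

```python
class Impasse(Exception):
    """Signals that the decision procedure cannot choose a single change."""

def elaboration_phase(working_memory: dict, long_term_memory: list) -> None:
    """Elaboration: ALL productions that match the current state fire,
    adding their results to working memory."""
    for production in long_term_memory:
        if production.matches(working_memory):
            working_memory["state"].update(production.action(working_memory))

def decision_phase(working_memory: dict, candidate_operators: list) -> str:
    """Decision: interpret the accumulated preferences and pick a single change,
    or raise an impasse when there is no basis for a preference."""
    preferred = [op for op in candidate_operators
                 if working_memory["state"].get(f"prefer_{op}")]
    if len(preferred) == 1:
        return preferred[0]
    raise Impasse(f"no basis for preferring one of {candidate_operators}")

def decision_cycle(working_memory: dict, long_term_memory: list,
                   candidate_operators: list) -> None:
    """One recognize-decide-act cycle: elaborate, then decide on an operator."""
    elaboration_phase(working_memory, long_term_memory)
    working_memory["operator"] = decision_phase(working_memory, candidate_operators)
```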

  17. Example Continued • After the elaboration phase, it is determined that John should either throw a curve ball or throw a fast ball, based on all of the available knowledge. • Given these possibilities, the decision phase determines that there is not enough information to make a decision. There is no basis for a preference, so an impasse has been reached.

  18. Architecture: Impasses • An impasse is an opportunity for learning. • An impasse occurs automatically whenever there isn’t enough knowledge to proceed, independent of any domain. • It automatically begins the creation of a new sub-goal context whose goal is to resolve the impasse.
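
  A minimal sketch of this mechanism, reusing the hypothetical GoalContext and Impasse definitions above: the new sub-goal's goal is literally to resolve the impasse.

```python
def create_subgoal(parent_context: GoalContext, impasse: Impasse) -> GoalContext:
    """Automatically create a sub-goal context whose goal is to resolve the impasse."""
    return GoalContext(
        goal=f"resolve impasse: {impasse}",
        problem_space="operator selection",
        state=dict(parent_context.state),  # the sub-goal can see the pre-impasse situation
        operator=None,
    )
```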

  19. Example • The impasse is an opportunity for learning: what is John’s past success rate? • John recalls that in the past he has been more successful with the curve ball, so he throws a curve ball. • Unfortunately, the ball is hit, but John recovers it and throws Joe out at first.

  20. Architecture: Chunking I • Chunking is the primary learning mechanism. • It automatically creates new associations in LTM whenever results are generated from an impasse. • The new associations map the relevant pre-impasse WM elements to the changes that resolved it, preventing that impasse in the future.
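
  Chunking can be caricatured as building a new production that maps the relevant pre-impasse working-memory elements directly to the result produced in the sub-goal, so the same impasse is skipped next time. A hypothetical sketch, reusing the Production sketch from slide 10:

```python
def chunk(pre_impasse_elements: dict, result: dict) -> Production:
    """Create a new LTM association from the WM elements that led to the impasse result."""
    snapshot = dict(pre_impasse_elements)
    return Production(
        name="chunk-" + "-".join(snapshot),
        condition=lambda wm: all(wm["state"].get(k) == v for k, v in snapshot.items()),
        action=lambda wm: dict(result),
    )

# e.g. after the sub-goal settles on the curve ball against this kind of batter:
new_chunk = chunk({"batter_handedness": "left"}, {"prefer_curve_ball": True})
long_term_memory.append(new_chunk)
```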

  21. Architecture: Chunking II • Chunking serves many purposes: • it integrates different types of knowledge, • it speeds up behavior, • it is the basis of inductive learning, analogical reasoning, etc. • Because it is the only architectural mechanism for changing LTM, it is assumed to be the basis of all types of learning in people.

  22. Example • The chunk takes into account the available information: weather, time of day, batter, field conditions, count, etc. • The next time, John can consider his success rate and these other factors to avoid the impasse. • Chunking produces preferences; here, a preference is added stating that if it is windy, John should throw fewer fast balls.
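
  That windy-weather preference could be written as one more hypothetical production added to LTM (same illustrative conventions as above):

```python
windy_preference = Production(
    name="windy-avoid-fast-ball",
    condition=lambda wm: wm["state"].get("weather") == "windy",
    action=lambda wm: {"prefer_curve_ball": True, "avoid_fast_ball": True},
)
long_term_memory.append(windy_preference)
```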

  23. What Makes Soar Different • ALL relevant knowledge is activated, rather than just matching individual rules and firing them. • Beliefs about the world are constantly and automatically updated; they are not maintained by other rules. • Agents use preferences: an agent can express knowledge about which option it prefers in the current situation. • Soar can create new and multiple states, whereas rule-based systems have only one state. • Learning from impasses generates new knowledge and allows the agent to avoid the same impasse in the future.

  24. Real World Applications • Several systems have been developed from the original SOAR architecture: • NL-Soar: natural-language comprehension. • SCA: a theory of symbolic concept acquisition. • NTD-Soar: a computational theory of the perceptual, cognitive, and motor actions performed by the NASA Test Director (NTD). • IMPROV: a computational theory of how to correct knowledge about what actions do in the world.

  25. Commercial Applications • Soar Technology, Inc.: its goal is to increase the realism of battle simulations by developing intelligent automated synthetic forces. • ExpLore Reasoning Systems, Inc. (ERS): ERS builds intelligent software solutions for the mutual-fund, mortgage, credit-card, and insurance industries.

  26. Problems With SOAR • The programming for Soar’s chunking can be very complex and difficult. • The Einstellung effect. • The power law of learning. • There are disadvantages to Soar’s architecture: large-scale chunking may lead to incorrect knowledge.

  27. The Future • Ultimately, SOAR has great potential, but it has many of the same limitations that humans have in learning. • Perhaps humans aren’t the best model for intelligent behavior. Maybe there isn’t a perfect model. • Artificial Intelligence can and will get better.

  28. References • Lehman, Laird, Rosenbloom, A Gentle Introduction to Soar, an Architecture for Human Cognition (1993). http://ai.eecs.umich.edu/soar/main.html • Cognitive modeling, symbolic. In Wilson & Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press. • Soar Technology, Soar: A Comparison with Rule-based Systems, 2002, Soar Technology, Inc. http://ai.eecs.umich.edu/soar/main.html • Soar Technology, Soar: A Functional Approach to General Intelligence, 2002, Soar Technology, Inc. http://ai.eecs.umich.edu/soar/main.html • Soar Technology, Soar: Along the Frontiers, 2002, Soar Technology, Inc. http://ai.eecs.umich.edu/soar/main.html • Lewis, Richard L., Cognitive Theory, SOAR, Ohio State University, 1999. • Cognitive Architectures. http://ai.eecs.umich.edu/cogarch2/index.html
