AGI Architectures & Control Mechanisms

Presentation Transcript


  1. AGI Architectures & Control Mechanisms

  2. Anatomy of an AGI system [diagram: the system is embedded in a real-world environment; sensors deliver data to internal processes, a control mechanism directs those processes, and actuators act back on the environment] (Intellifest 2012)

  3. NARS: Judgment An input judgment is a piece of new knowledge to be absorbed. To process such a task means not only adding it to the system's knowledge base (memory), but also using it together with existing knowledge to derive new knowledge by forward inference.

  4. NARS: Question An input question is a user query to be answered. To process such a task means to find a judgment that answers the question as well as possible (as defined by the choice rule). Backward inference may be used to get answers through derived questions.

  5. NARS: Goal An input goal is a user command to be followed, or a statement to be realized. To process such a task means to check if the statement is already true, and if not, to execute some operation to make the statement true. Backward inference may also be used to generate derived goals.
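To make the three task types concrete, here is a minimal Python sketch. The names (`TaskType`, `Task`, and the fields) are illustrative assumptions for this presentation, not the actual NARS/OpenNARS API.

```python
import enum
from dataclasses import dataclass

class TaskType(enum.Enum):
    JUDGMENT = "judgment"  # new knowledge: stored and used in forward inference
    QUESTION = "question"  # query: answered by the best judgment (choice rule)
    GOAL = "goal"          # statement to be realized, possibly via operations

@dataclass
class Task:
    type: TaskType
    statement: str         # e.g. "raven -> bird" (simplified Narsese-like form)
    priority: float = 0.5  # relative claim on resources, in [0, 1]

tasks = [
    Task(TaskType.JUDGMENT, "raven -> bird"),
    Task(TaskType.QUESTION, "bird -> ?x"),
    Task(TaskType.GOAL, "door -> open"),
]
```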

  6. NARS: Control Mechanism • Responsible for resource management • Choosing premises and inference rules in each inference step • Memory allocation

  7. NARS: Control Mechanism • Time-sharing: a single inference step per time-slice • Probabilistic task selection: selection probability proportional to priority • Each task carries a priority value in [0, 1] • Priority is relative; it depends on the operational context of the system • Priority is assigned manually for input tasks and automatically for derived tasks
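Priority-proportional selection amounts to a weighted random choice. A minimal sketch, assuming tasks expose a `priority` attribute as above (`select_task` is a hypothetical helper, not a NARS API):

```python
import random

def select_task(tasks):
    """Pick one task; selection probability is proportional to its priority."""
    weights = [t.priority for t in tasks]
    return random.choices(tasks, weights=weights, k=1)[0]
```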

  8. NARS: Control Mechanism [diagram: an input question Q spawns derived questions Q1, Q2, Q3] Termination of the input question (successful or otherwise) does not automatically remove the derived questions from the task pool.

  9. NARS: Control Mechanism • With a single task t1, that task receives all available resources • With two tasks t1 and t2, with respective priority values p1 and p2, the resources each receives in the near future are determined by the ratio p1 : p2 • The resources allocated to a task therefore depend not only on its own priority value but also on the priority values of the other active tasks (see the simulation below)
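A quick simulation (with illustrative priority values) shows the selection frequencies approaching the p1 : p2 ratio:

```python
import random
from collections import Counter

priorities = {"t1": 0.6, "t2": 0.3}  # p1 : p2 = 2 : 1

counts = Counter(random.choices(list(priorities),
                                weights=list(priorities.values()),
                                k=10_000))
print(counts)  # t1 is selected roughly twice as often as t2
```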

  10. NARS: Control Mechanism • Each task also carries an aging factor, durability, in [0, 1] • Durability makes priority values decay gradually: after each step, priority becomes durability * priority • Durability and priority are constantly re-evaluated based on the operational context • If a good solution has been found, the task's priority is decreased
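The decay rule can be sketched as repeated multiplication by the durability factor (the numbers are illustrative):

```python
priority, durability = 0.8, 0.9

for step in range(1, 6):
    priority *= durability  # one decay step: priority := durability * priority
    print(f"after step {step}: priority = {priority:.3f}")

# Finding a good solution would trigger an additional explicit decrease,
# e.g. priority *= 0.5, on top of this gradual decay.
```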

  11. NARS: Anytime processing • Traditional systems report the answer/solution only at a final state • Anytime processing keeps the best solution found so far available at any time, while continuing to look for better ones • Finding the optimal solution requires exhaustive search, which is not possible in most cases • The system therefore seeks satisfactory rather than optimal solutions • The amount of processing a task receives is determined by competition for resources
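Anytime behavior reduces to keeping a best-so-far result while the budget lasts. This sketch uses a plain time budget as a stand-in for NARS's competition-based resource allocation:

```python
import random
import time

def anytime_search(candidates, score, budget_seconds):
    """Return the best candidate seen before the budget runs out."""
    best = None
    deadline = time.monotonic() + budget_seconds
    for c in candidates:
        if best is None or score(c) > score(best):
            best = c  # the best-so-far answer is available at any time
        if time.monotonic() >= deadline:
            break     # budget exhausted: satisfactory, not provably optimal
    return best

# Example: approximate the maximum of a stream too large to exhaust.
stream = (random.random() for _ in range(10**8))
print(anytime_search(stream, score=lambda x: x, budget_seconds=0.05))
```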

  12. NARS: Memory Structure • Bag: a special data structure for a system with insufficient resources • A probabilistic priority queue containing items, each with a priority value • Two major operations: • Put in: inserts the item into the bag; if the bag is full, the existing item with the lowest priority is removed • Take out: returns one item from a non-empty bag, chosen in probabilistic fashion based on priority (a sketch follows)
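A minimal sketch of a Bag with its two operations. Real NARS implementations bucket items by priority level for efficiency; this flat-list version only illustrates the behavior:

```python
import random

class Bag:
    """Probabilistic priority queue with bounded capacity (a sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # list of (priority, item) pairs

    def put_in(self, item, priority):
        """Insert; if the bag is full, evict the lowest-priority item."""
        if len(self.items) >= self.capacity:
            lowest = min(range(len(self.items)),
                         key=lambda i: self.items[i][0])
            self.items.pop(lowest)
        self.items.append((priority, item))

    def take_out(self):
        """Remove and return one item from a non-empty bag, chosen with
        probability proportional to its priority."""
        weights = [p for p, _ in self.items]
        idx = random.choices(range(len(self.items)), weights=weights, k=1)[0]
        return self.items.pop(idx)[1]
```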

  13. NARS: Concept • All tasks and beliefs that share a common term make up a concept, which is named by that shared term • The concept "bird" collects statements such as: raven -> bird • pigeon -> bird • bird -> swimmer • bird -> animal • Any valid inference step necessarily happens within a single concept • A concept is therefore a unit of resource allocation • System resources are first distributed among concepts, then within each concept among its tasks and beliefs

  14. NARS: Memory structure [diagram: two-level memory structure; each concept contains its own beliefs and tasks] Beliefs connect one concept to another concept (a sketch follows).
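Building on the `Bag` sketch above, the two-level structure might look like this (the names, capacities, and term index are illustrative; bookkeeping such as keeping the index in sync with bag eviction is omitted):

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """Named by its shared term; holds its own task and belief bags."""
    term: str
    tasks: Bag = field(default_factory=lambda: Bag(capacity=100))
    beliefs: Bag = field(default_factory=lambda: Bag(capacity=100))

class Memory:
    """Level one: a bag of concepts; level two: each concept's own bags."""

    def __init__(self):
        self.concepts = Bag(capacity=1000)
        self._by_term = {}  # term -> Concept (simplified index)

    def concept_for(self, term):
        """Find or create the concept that a term names."""
        if term not in self._by_term:
            concept = Concept(term)
            self._by_term[term] = concept
            self.concepts.put_in(concept, priority=0.5)
        return self._by_term[term]
```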

  15. NARS: Execution Cycle • 1. To check the input buffer. If there are new (input or derived) tasks, add them into the corresponding concepts (after some simple preprocessing). • 2. To take out a concept from the concept bag. • 3. To take out a task from the task bag of that concept. • 4. To take out a belief from the belief bag of that concept. • 5. To apply inference rules to the task and belief. Which rule is applicable is determined by the syntax of the task and the belief. • 6. To adjust the priority and durability values of the involved task, belief, and concept, according to the quality of the results. • 7. To return the involved task, belief, and concept to the corresponding bags. • 8. To put the results generated in this step into the input buffer as new tasks. If a result happens to be a best-so-far answer to a question asked by the user, it is reported to the user.
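Tying the sketches above together, one cycle might be structured as follows. `infer` is a stub standing in for the real rule tables, the concept key is simplified to the whole statement (a real system extracts its terms), and priority bookkeeping is reduced to fixed illustrative values:

```python
def infer(task, belief):
    """Stub: a real system applies the rules that match the syntax of
    the task and belief, returning derived tasks."""
    return []

def execution_cycle(memory, input_buffer, on_answer=print):
    # 1. Drain the input buffer into the corresponding concepts.
    while input_buffer:
        task = input_buffer.pop()
        memory.concept_for(task.statement).tasks.put_in(task, task.priority)
    # 2. Take a concept out of the concept bag.
    concept = memory.concepts.take_out()
    if not concept.tasks.items or not concept.beliefs.items:
        memory.concepts.put_in(concept, priority=0.5)
        return  # nothing to infer on this cycle
    # 3./4. Take a task and a belief (here also Task-like, with a priority)
    # out of that concept's bags.
    task = concept.tasks.take_out()
    belief = concept.beliefs.take_out()
    # 5. Apply the inference rules matched by the task/belief syntax.
    results = infer(task, belief)
    # 6. (Omitted) adjust priority/durability by the quality of the results.
    # 7. Return task, belief, and concept to their bags.
    concept.tasks.put_in(task, task.priority)
    concept.beliefs.put_in(belief, belief.priority)
    memory.concepts.put_in(concept, priority=0.5)
    # 8. Results become new tasks; best-so-far answers go to the user.
    for result in results:
        input_buffer.append(result)
        if result.type is TaskType.QUESTION:  # simplified answer check
            on_answer(result)
```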

  16. NARS • NARS is, at its core, a reasoning system • This has implications: perception and motor control are not a natural fit for a purely inferential framework • Work is underway to add these capabilities

  17. The LIDA cognitive architecture • Biologically / psychologically inspired • Hybrid (symbolic/subsymbolic) • Implementation of global workspace theory

  18. LIDA overview [architecture diagram]

  19. The LIDA cognitive cycle • 1. Understanding phase • Low- and high-level feature detection • Sensory input activates existing knowledge across several types of memory • Produces a model of the current situation • 2. Attention phase • Coalitions of the most salient information in the current situational model are formed • The resulting coalitions compete for attention • The winners of the competition are broadcast globally throughout the system (see the sketch below) • 3. Action selection & learning phase • Learning of new entities, associations, and reinforcement • Possible action schemes are instantiated from Procedural Memory and sent to the Action Selection module • The schemes compete for execution there, and the winners are executed
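The competition-and-broadcast core of the attention phase fits in a few lines. `Coalition`, `salience`, and the module callbacks are illustrative names for this sketch, not the LIDA framework's actual API:

```python
from dataclasses import dataclass

@dataclass
class Coalition:
    """A bundle of salient content from the current situational model."""
    content: str
    salience: float  # activation level driving the competition

def attention_phase(coalitions, modules):
    """The most salient coalition wins and is broadcast to every module."""
    winner = max(coalitions, key=lambda c: c.salience)
    for receive in modules:
        receive(winner.content)  # global broadcast
    return winner

coalitions = [Coalition("obstacle ahead", 0.9),
              Coalition("battery low", 0.4)]
attention_phase(coalitions,
                modules=[lambda msg: print("broadcast received:", msg)])
```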

  20. Model of visual attention based on global scene factors (Torralba)

  21. Three layer model of visual attention (Mancas et al.)
