
Presentation Transcript


  1. Dimensions of Scalability in Cognitive Models. Research Team: Carnegie Mellon University, Psychology Department: Dr. Christian Lebiere, Dr. David Reitter, Dr. Jerry Vinokurov, Michael Furlong, Jasmeet Ajmani

  2. Overview • Goal: Scaling up high-fidelity cognitive models by • Composing models • Abstracting models • Running large networks of models • ACT-UP: a toolkit view of cognitive architectures • Same validated functionality, different form • Lemonade game: Reusing and integrating models • Language learning: Scaling up to network cognition • The Geo-Game: Bringing it all together • Platform for experimentation and integration

  3. Dimensions of Scaling

  4. ACT-R Cognitive Architecture • Computational implementation of a unified theory of cognition • Commitment to task-invariant mechanisms • Modular organization • Parallelism but strong attentional limitations • Hybrid symbolic/statistical processes

  5. Issues with Cognitive Modeling • High-fidelity cognitive models accurately capture all observable dimensions of cognition (time, accuracy, gaze, neural), but: • They are computationally intensive, as they simulate all cognitive processes in full detail • They are labor-intensive, since all aspects of cognitive performance (knowledge, strategies) must be specified • They are specialized to a given task in a way that makes them difficult to compose and reuse • They usually focus on single-agent cognition

  6. Scaling Up Cognitive Modeling • Enable the implementation of more complex cognitive models in a more efficient manner • Scale up the application of cognitive models to simulate learning and adaptation in communities (e.g., 1,000 models in parallel) • Enable reuse and composition of cognitive models similar to software engineering view • Facilitate integration of cognitive models with other modeling and simulation platforms • Improve maintenance, update and validation

  7. The Approach • Difficulties: ACT-R is heavily constrained already, and models are difficult to develop, reuse and exchange • Constraints: Architectural advances require further constraints, e.g. more representational constraints • Scaling it up: Complex tasks, broad coverage of behavior, multi-agent cognition and predictive modeling may motivate further architectural changes • Solution: produce models at a higher abstraction level • Retain and emphasize key cognitive mechanisms • Abstract purely mechanistic model aspects • Precisely specify model claims, underspecify/fit rest • Benefits of abstraction in efficiency, scalability, reuse

  8. Cognitive Strategy (diagram): the symbolic level is deterministic; the subsymbolic level (learning/adaptation) is non-deterministic and explains empirical variance.

  9. Underspecified Models (diagram): underspecify the deterministic aspects; specify the non-deterministic aspects, which explain empirical variance.

  10. (Lisp Functions)

  11. ACT-UP vs ACT-R 6 • Declarative memory: chunks as objects • Explicit context specification; all activation computations • Procedural memory: productions as functions • Explicit conflict set groups; utility reinforcement learning • ACT-UP is synchronous with serial execution • Parallelism in process of being implemented • Perceptual-motor modules being planned
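To make the "toolkit view" concrete, here is a small illustrative Python sketch. It is not ACT-UP's actual Lisp API, and none of the names below come from the slides; it only shows the idea that chunks are plain objects, retrieval is an explicit call, and a production is an ordinary function the modeler invokes.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """A declarative chunk as a plain object (hypothetical structure)."""
    word: str
    meaning: str

memory = [Chunk("lemon", "fruit"), Chunk("stand", "booth")]

def retrieve(**constraints):
    """Explicit retrieval from declarative memory; a full toolkit would also
    compute activations (recency, frequency, partial matching) here."""
    for chunk in memory:
        if all(getattr(chunk, k) == v for k, v in constraints.items()):
            return chunk
    return None

def name_object(meaning):
    """A 'production' written as an ordinary function: it performs an explicit
    retrieval and returns the associated word, instead of firing via a matcher."""
    chunk = retrieve(meaning=meaning)
    return chunk.word if chunk else None

print(name_object("fruit"))  # -> "lemon"
```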

  12. Validation • Against data from the canonical ACT-R tutorial models

  13. Efficiency • Sentence production (syntactic priming) model • 30 productions in ACT-R, 720 lines of code • 82 lines of code in ACT-UP (3 work-days) • ACT-R 6: 14 sentences/second • ACT-UP: 380 sentences/second

  14. Scalability • Language evolution model • Simulates domain vocabulary emergence (ICCM 2009, JCSR 2010) • 40 production rules in ACT-R • Complex execution paths: could not prototype • 8 participants interacting in communities • In larger community networks: • 1,000 agents • 84M interactions (about 1 min. sim. each) • 37 CPU hours

  15. Related Work • Douglass (2009; 2010) on large declarative memories • Implementation through Erlang threads • Focus on scalability • Salvucci (2010) work on supermodels • Integrating and validating independent models • Focus on instruction interpretation for generality • Stewart and West (2007) work on Python-ACT-R • Similar deconstructive view of architecture • Integration with neural constructs

  16. Future Work • Complete validation against canonical model set; currently in beta testing; full release planned for spring 2011 • Possible collaboration with AFRL Mesa on implementation of finite-state-based systems • Potential use in other projects (Mind's Eye, Robotics CTA) • Allow optional parallelism where needed and desired • Implement perceptual and motor modules • Potential implementation in other languages (C++, Java) to facilitate code-level integration with common frameworks Reitter, D., & Lebiere, C. (2010). Accountable Modeling in ACT-UP, a Scalable, Rapid-Prototyping ACT-R Implementation. In Proceedings of the 2010 International Conference on Cognitive Modeling. Philadelphia, PA. Lebiere, C., & Reitter, D. (2010). ACT-UP: A Cognitive Modeling Toolkit for Composition, Reuse and Integration. In Proceedings of the 2010 MODSIM conference. Hampton, VA. Lebiere, C., Stocco, A., Reitter, D., & Juvina, I. (2010). High-fidelity cognitive modeling to real-world applications. In Proceedings of the NATO Workshop on Human Modeling for Military Application, Amsterdam, NL, 2010.

  17. Cognitive principles in cooperative and adversarial games: Metacognition transfers via ACT-UP. (Scaling diagram: Individuals → Dyads (Dialogue) → Communities (Teamwork) → Networks (Distributed Knowledge); from controlled tasks with high-fidelity models to complex tasks with broad-coverage models.)

  18. ACT-UP: Rapid prototyping/Reuse • Dynamic Stocks & Flows ACT-UP model • Winning modeling competition entry • Model written in < 1 person-month • Free parameters (timing) estimated from example data • Model generalized to novel conditions • Reuse of Metacognitive Strategy in the Lemonade Stand Game (BRIMS 2010) Kevin A. Gluck, Clayton T. Stanley, L. Richard Moore, Jr., David Reitter, and Marc Halbrügge. Exploration for understanding in model comparisons. Journal of Artificial General Intelligence (to appear), 2010. David Reitter. Metacognition and multiple strategies in a cognitive model of online control. Journal of Artificial General Intelligence (to appear), 2010. David Reitter, Ion Juvina, Andrea Stocco, and Christian Lebiere. Resistance is futile: Winning lemonade market share through metacognitive reasoning in a three-agent cooperative game. In Proceedings of the 19th Behavior Representation in Modeling & Simulation (BRIMS), Charleston, SC, 2010.

  19. Multi-agent Games • 2x2 games such as the Prisoner’s Dilemma • Evolution of cooperation vs. competition • Memory-based expectations (Lebiere et al, 2001) • Adversarial games (Paper Rock Scissors, Baseball) • Zero-sum competition where predictability is fatal • Sequence-based expectations (Lebiere et al, 1998; 2003) • Lemonade game (3 players) • Simultaneous cooperation and competition • Predictability can be desirable for cooperation

  20. The Lemonade Stand Game • In each iteration, each of three players chooses a location 1..12 • Payoff is proportional to the distance to left and right neighbors. • Hidden moves (blind choice) • 1 game: 100 iterations, then reset (no state across games) Zinkevich (2010, unpublished)
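A minimal sketch of the payoff rule as described on the slide: three players pick spots on a 12-location circle and earn an amount proportional to the gaps to their left and right neighbors. The slide does not state the proportionality constant or a tie-breaking rule, so this version (plain Python, illustrative only) simply splits each gap evenly and does not handle collisions.

```python
def lemonade_payoffs(positions, board=12):
    """Per-round payoffs for three players on a circular board.

    Assumption (not stated on the slide): each player's payoff is half the gap
    to its clockwise neighbor plus half the gap to its counter-clockwise
    neighbor, so the total payoff per round equals `board`.  Collisions (two
    players on the same spot) would need a tie-breaking rule and are ignored.
    """
    n = len(positions)
    payoffs = [0.0] * n
    order = sorted(range(n), key=lambda i: positions[i])  # order around the circle
    for k, i in enumerate(order):
        nxt = order[(k + 1) % n]
        prv = order[(k - 1) % n]
        gap_cw = (positions[nxt] - positions[i]) % board
        gap_ccw = (positions[i] - positions[prv]) % board
        payoffs[i] = (gap_cw + gap_ccw) / 2.0
    return payoffs

# Example: players at 1, 5 and 9 split the circle evenly -> 4.0 each
print(lemonade_payoffs([1, 5, 9]))
```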

  21. Basic Strategies • Random (unpredictable): choose a random location • Sticky (predictable): choose the same location every iteration • Roll, SquareRoot • Tournament among these four agents • Equal performance

  22. Strategy Elements • Offer Cooperation: Be predictable • Predict: Learn patterns of opponents • Maximize Utility: Choose highest expected payoff • Cooperate: Pick a "friendly" opponent whose payoff is also maximized • Monitoring: analyze own/others' performance, keep a history

  23. Strategies

  24. Metacognition • Facility to constantly monitor performance, and to adapt behavior accordingly • Choose the best-performing strategy out of a set of strategies (Flavell 1979, Brown 1987) • Strategy-shifting assumed in Dynamic Stocks & Flows data (DSF Challenge)

  25. General Metacognition • Prediction of each opponent’s next move • Learn from agent’s history in this game • Multiple possible representations and pattern-matching • Action: Making a move • Optimize Utility • Suggest cooperation • Cooperate • Hurt the worst adversaries

  26. Evaluating Strategies • Prediction and Action strategies are learned as episodes (instances): • one episode per prediction strategy, per iteration, per opponent • one per action strategy, per iteration • Instance-based learning (Gonzalez & Lebiere, 2003) • Objective: prediction quality / action payoff • Blending: weighted mean (recency, frequency, objective as above); written out below
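For reference, the standard ACT-R blending computation that this slide appeals to can be written out as follows; the exact variant used in the model is not recoverable from the transcript, and the decay d and noise temperature τ are architectural parameters whose values are not given on the slides.

```latex
A_i = \ln \sum_{j=1}^{n_i} t_{ij}^{-d}
\qquad
P_i = \frac{e^{A_i/\tau}}{\sum_k e^{A_k/\tau}}
\qquad
V = \sum_i P_i \, v_i
```

Here A_i is the activation of instance i from its presentation times t_ij (recency and frequency), P_i its retrieval probability, and v_i its stored objective (prediction quality or action payoff); V is the blended, activation-weighted mean.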

  27. Metacognition in Prediction (as in Reitter, 2010, DSF model) • Each prediction strategy suggests a next location for each opponent • All past predictions are stored throughout the game as episodes <t, l, p>: time t, actual chosen location l of agent a, and the probability p that strategy s assigned to l • ACT-R activation (recency, frequency) over these episodes yields the expected success of strategy s for agent a (sketched below) • Metacognition for Actions works analogously
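A minimal Python sketch, not the authors' code, of how the expected success of one prediction strategy for one opponent could be blended from its stored <t, l, p> episodes, operationalizing the blending equations above. The decay D and temperature TAU values are assumptions.

```python
import math

D, TAU = 0.5, 0.25  # assumed decay and noise temperature

def base_level_activation(t_episode, now):
    """ACT-R base-level activation of a single-presentation episode."""
    return math.log((now - t_episode) ** -D)

def expected_success(episodes, now):
    """Blend the predicted probabilities p stored in <t, l, p> episodes of one
    strategy/opponent pair into an activation-weighted expectation."""
    acts = [base_level_activation(t, now) for t, _, _ in episodes]
    weights = [math.exp(a / TAU) for a in acts]
    total = sum(weights)
    return sum(w * p for w, (_, _, p) in zip(weights, episodes)) / total

# Example: episodes are (time, actual location, probability the strategy had
# assigned to that location); recent episodes dominate the blend.
episodes = [(10, 3, 0.2), (60, 7, 0.8), (90, 7, 0.9)]
print(expected_success(episodes, now=100))
```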

  28. Evaluation • Outcome of each strategy depends on configuration of players • Some strategies will cooperate • Metacognitive strategy is flexible, achieves consistently high results • (Figure legend: bigger circle = higher winnings; darker circle = more consistent results)

  29. Tournament

  30. Adaptive Multi-Agent Behavior • Offering cooperation and cooperating with the right opponent are crucial to doing well • Metacognitive layer allows an agent to trump all others through generality and adaptivity • Research questions: • Human performance in cooperative games: issues of trust, social and cultural biases • Memory activation and rational retrieval expectations as proxy for weighing past strategy success – limits of metacognition

  31. Future Work • ACT-R/ACT-UP's learning vs. more basic Bayesian models: is cognitive learning more robust through open-endedness? • Break down the current limits of cognitive models' generality • Are canonical architectural parameters optimal, through coevolution, for empirical clustering factors and degrees? • A key part of the environment is social interactions • Automatic acquisition of rules, strategies, and structural representations rather than modeler specification • Metacognition: accumulation of a micro-strategies library into a reusable, general-purpose metacognitive layer • Combining the above provides a way of breaking out of task-specific models and their assumptions: beyond task-specific parameters, representations, strategies

  32. Scaling Up Cognitive Models from Individuals to Large Networks: The case of communication in human communities. Dr. Christian Lebiere, Dr. David Reitter, Carnegie Mellon University. (Scaling diagram: Individuals → Dyads (Dialogue) → Communities (Teamwork) → Networks (Distributed Knowledge); from controlled tasks with high-fidelity models to complex tasks with broad-coverage models.) David Reitter and Christian Lebiere. Towards explaining the evolution of domain languages with cognitive simulation. Cognitive Systems Research (in press), 2010.

  33. Interactive Alignment: syntactic representation. (Figure from Pickering & Garrod, BBS 2004.)

  34. Adaptation in Language (Reitter 2008, Switchboard corpus) • Rapid decay within 8-10 seconds, shown experimentally for selected constructions: Levelt & Kelter (1982), Branigan et al. (2000) • Long-term adaptation effects, which do not decay, have also been observed (comprehension: Mitchell et al. 1995; production: Bock & Griffin 2000) • ACT-R's declarative memory decay explains the decay of repetition probability

  35. Interactive Alignment: syntactic and lexical adaptation predict task success! (Reitter & Moore 2007). (Figure: lexical representations, from Pickering & Garrod, BBS 2004.)

  36. Domain Language Experiment • Vocabulary: signs as meaning-signifier combinations. Simple communication system: Lewis 1989, Hurford 1989, Oliphant & Batali 1996 • Naming game: an idealized transaction between two players • Pictionary: a director draws a given target concept using elementary drawings; a matcher has to guess the concept (example target: "Brad Pitt") • 20 target concepts, repeated • Director/Matcher receive no explicit feedback • Fay et al., Cognitive Science 34(3), 2010; Kirby et al., PNAS 2008; Fay et al., Phil. Trans. Roy. Soc. B, 2008

  37. Pictionary Performance (plot: ID accuracy, the proportion of signs retrieved, over rounds for communities vs. isolated pairs; vertical markers at partner switches in the community condition). Empirical; from data by Fay et al. 2010.

  38. Broad Questions • How does the architecture of human cognition interact with social structure? • Have the human mind and large-scale social structures co-evolved? • Can modeling predict the kinds of team structures that will yield optimal communication and collaboration?

  39. Pictionary Model in ACT-UP • Ontology shared between director and matcher • abstract target concepts • concrete drawings • link-weight distribution acquired from Wall Street Journal collocations • Director chooses three related drawings to convey a target concept (see the sketch below) • Choice is conventionalized • Decision-making and memory retention modeled with ACT-UP • (Figure: ontology with weighted links)
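A hypothetical sketch of the Pictionary transaction in plain Python (not ACT-UP): the ontology is a weighted concept-to-drawing graph, the director draws the three drawings most strongly linked to the target, and the matcher guesses the concept whose links to those drawings sum highest. The toy weights stand in for the WSJ-collocation weights mentioned on the slide; the conventionalization of choices and ACT-UP memory retention are omitted.

```python
ONTOLOGY = {
    "Brad Pitt": {"face": 0.9, "star": 0.7, "film reel": 0.6, "tree": 0.1},
    "forest":    {"tree": 0.9, "leaf": 0.8, "face": 0.1},
}

def director_draw(concept, n=3):
    """Pick the n drawings most strongly linked to the target concept."""
    links = ONTOLOGY[concept]
    return sorted(links, key=links.get, reverse=True)[:n]

def matcher_guess(drawings):
    """Guess the concept with the highest summed link weight to the drawings."""
    def score(concept):
        return sum(ONTOLOGY[concept].get(d, 0.0) for d in drawings)
    return max(ONTOLOGY, key=score)

signs = director_draw("Brad Pitt")        # e.g. ['face', 'star', 'film reel']
print(signs, "->", matcher_guess(signs))  # -> Brad Pitt
```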

  40. Pictionary and Networks (plot: ID accuracy, the proportion of signs retrieved, over rounds for communities vs. isolated pairs; ACT-UP model, 100 repetitions; vertical markers at partner switches in the community condition). Reitter & Lebiere, Journal of Cognitive Systems Research, in press.

  41. Scaling up to Networks. Dr. Christian Lebiere, Dr. David Reitter, Carnegie Mellon University. (Scaling diagram: Individuals → Dyads (Dialogue) → Communities (Teamwork) → Networks (Distributed Knowledge); from controlled tasks with high-fidelity models to complex tasks with broad-coverage models.) Reitter, D., & Lebiere, C. (2010). Did social networks shape language evolution? A multi-agent cognitive simulation. In Proc. Cognitive Modeling and Computational Linguistics Workshop (CMCL 2010), Uppsala, Sweden.

  42. Research Questions • Does network structure affect convergence towards a common community vocabulary? • Or: Is declarative memory robust w.r.t. a variety of network structures? • The small-scale empirical and modeling data suggest that extreme networks (fully connected vs. disconnected) arrive at similar performance, but converge differently. How? Why? • Larger communities that differ in their connectivity are needed to answer these questions.

  43. Network Types • In a network, only network neighbors play the naming game • Social: Small-World network (low path length, high clustering coefficient, assortatively mixed by degree) • Grid (torus) • Random Graph • Organizational: Trees • Controlled: mean degree (except trees), number of nodes • Here: 512 nodes, mean degree 6, 50 repetitions per condition (see the sketch below)
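The four community structures can be generated with networkx for experimentation. This is a sketch, not the authors' setup: the slide only fixes 512 nodes and mean degree 6 (except trees), so the rewiring probability, grid shape, and tree branching factor below are illustrative assumptions.

```python
import networkx as nx

N = 512

small_world = nx.watts_strogatz_graph(N, k=6, p=0.1)   # low path length, high clustering
grid        = nx.grid_2d_graph(16, 32, periodic=True)  # torus (degree 4 here, not 6)
random_net  = nx.random_regular_graph(6, N)            # random graph, mean degree 6
tree        = nx.balanced_tree(r=6, h=3)               # organizational hierarchy (259 nodes)

for name, g in [("small world", small_world), ("grid", grid),
                ("random", random_net), ("tree", tree)]:
    degrees = [d for _, d in g.degree()]
    print(name, g.number_of_nodes(), "nodes, mean degree",
          round(sum(degrees) / len(degrees), 2))
```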

  44. ID Accuracy: Neighbors (plot) • Effect of network type (***): random < grid < small-world < tree • MCMC on LMER: log(IDacc) ~ center(round) + cond + (1|sequence) • Preliminary results

  45. ID Accuracy: Random Pairs (plot; conditions: Small World, Random, Grid, Tree) • Indication of convergence towards a common vocabulary across the network (measured after round 35) • Tree vs. others: n.s. (p = 0.14, MCMC on LMER: log(IDacc) ~ cond + (1|sequence)) • Preliminary results

  46. Summary • Online linguistic adaptation is a known phenomenon • syntactic and lexical, between two or more participants • Nodes can adapt to their immediate surroundings • Tree hierarchies function very well when stable, but are not robust to structural change • Tree hierarchies represent contemporary organizational hierarchies and generalize typical command structures • Small-World structures are more robust to change.

  47. Future Work • Which advantages do non-tree network organizational structures have in situations where environment/ground truth changes, where adversarial elements are present? • How can temporal dynamics in network structure (gradual ramp-up in connectivity) support information convergence (domain vocabulary acquisition)? • Do cognitive models require explicit information processing policies in non-tree hierarchies, such that accountability and reliability are preserved? • Integration of communication with planning, control and decision-making in complex dynamic domains.

  48. Information Exchange in Networks • Simulation at the cognitive level: Language Evolution Model • Simulation with Bayesian learners: Wang et al. (CMU Robotics), for a Bayesian belief-update network • Empirical validation is rare • Real-time communication networks are rarely studied • Most empirical datasets contain asynchronously produced communication and lack control over the exchanged information (e.g., Enron or Twitter corpora)

  49. Human Networks: Empirical Experiments with the Geo Game. Dr. Christian Lebiere, Dr. David Reitter (Psychology, Carnegie Mellon University); Dr. Katia Sycara, Antonio Juarez, Dr. Paul Scerri, Dr. Robin Glinton (Robotics Institute, Carnegie Mellon University); Dr. Michael Lewis (University of Pittsburgh). (Scaling diagram: Individuals → Dyads (Dialogue) → Communities (Teamwork) → Networks (Distributed Knowledge); from controlled tasks with high-fidelity models to complex tasks with broad-coverage models.)

  50. MURI Team: The Geo Game. (Table: research topics mapped to levels, Level 1 through Level 4, across the teams at Pitt, CMU Robotics, CMU Psychology, Cornell, MIT, and GMU.) Topics: scaling of cognitive performance and workload; task allocation among humans/agents; probabilistic models of human decision-making in network situations; decentralized control, search and planning; information fusion; network performance as a function of topology; communication, evolution, language; adaptive automation.
