
Characterizing the Performance of Applied Intelligent Agents in Soar


Presentation Transcript


  1. Characterizing the Performance of Applied Intelligent Agents in Soar
     Randolph M. Jones, PhD; Steve Furtwangler; Mike van Lent, PhD
     Soar Workshop 2011
     Soar Technology, Inc. Proprietary, 6/12/2011

  2. Definitions
     • Research agents
       • Agents built primarily for scientific goals (as opposed to application goals)
       • To explore cognitive mechanisms and capabilities
       • Explicitly to evaluate the architecture
     • Applied agents
       • Agents developed for DoD projects, for which:
         • Evaluation of Soar was not a goal in the agent design
         • The primary design and implementation goal was to meet application-specific reasoning and behavior-generation requirements
         • Efficiency optimization was a secondary emphasis, as long as the systems met the project requirements
     • Acceptable performance
       • Execution time and memory requirements should not increase over time
       • Decision-cycle time for human-level reactivity is 50 msec (Newell, 1990)
     Applied Agent Performance in Soar, Soar Workshop 2011

  3. Hypotheses
     • The Soar performance results demonstrated by other studies will also carry over to applied agent systems
       • H1: Over a long-running task, memory usage will initially increase monotonically, then reach an asymptote
       • H2: Over a long-running task, execution time will initially increase monotonically, then reach an asymptote
       • H3: Decision-cycle time will be well within the 50 msec theoretical limit for human-like reaction
     • Because applied agents are not primarily optimized for performance, there are ready opportunities to improve them
       • H4: Simple adjustments will improve performance, with diminishing returns for iterative improvements
     • Additional empirical question
       • Using the 50 msec reaction time as a limit, what is the order of magnitude for the number of applied intelligent agents that can be expected to run within acceptable performance constraints on typical modern computer hardware?
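The empirical question above reduces to back-of-the-envelope arithmetic: divide the 50 msec reactivity budget by the per-agent decision cost. A minimal sketch, where `per_decision_ms` is a hypothetical placeholder rather than a value measured in the talk:

```python
# Rough capacity estimate: how many agents fit inside one real-time
# decision budget? The per-decision cost below is an illustrative
# assumption, not a measurement from these experiments.
BUDGET_MS = 50.0        # Newell's (1990) human-level reactivity bound
per_decision_ms = 0.5   # hypothetical per-agent decision cost

def max_agents(budget_ms: float, cost_ms: float) -> int:
    """Agents that can each take one decision within the budget,
    assuming serial execution and a uniform per-decision cost."""
    return int(budget_ms // cost_ms)

print(max_agents(BUDGET_MS, per_decision_ms))  # 100 under these assumptions
```

This naive serial model ignores simulation and interface overhead, which is presumably part of why the measured answer reported in the conclusions is "dozens" rather than the raw quotient.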

  4. Soar Agents for Evaluation

  5. Agent environment for experimentation: SimJr
     • Developed and maintained at SoarTech
     • Features:
       • Modular and extensible Java implementation
       • Low-fidelity models of fixed- and rotary-wing aircraft
       • Support for high-fidelity/intelligent model control
       • Exploits Java's integration tools and environments
     • Fully open-source
     • Available from: http://code.google.com/p/simjr/

  6. Result: Working memory size does not increase over time
     • H1 confirmed

  7. Result: Kernel time per decision is very low and does not increase over time
     • H2 and H3 confirmed
     • Two orders of magnitude below the 50 msec target

  8. Result: Wall-clock time per agent-decision is very low and does not increase over time
     • RWA-SAR shows a slight increase, to be investigated in future analysis
     • H2 confirmed with one exception; H3 confirmed
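Measurements like those in slides 6 through 8 amount to instrumenting the decision loop and checking two properties: per-decision time stays under the 50 msec bound (H3) and does not trend upward over a long run (H2). A minimal sketch, assuming a `step` callable that stands in for one agent decision cycle (the actual experiments used Soar's kernel timers and SimJr, not this harness):

```python
import time

def time_decisions(step, n_decisions):
    """Wall-clock time (ms) for each of n_decisions calls to `step`,
    where `step` is a stand-in for one agent decision cycle."""
    times = []
    for _ in range(n_decisions):
        t0 = time.perf_counter()
        step()
        times.append((time.perf_counter() - t0) * 1000.0)
    return times

def within_budget(times_ms, limit_ms=50.0):
    """H3-style check: every decision finished under the reactivity limit."""
    return max(times_ms) < limit_ms

def drift(times_ms, window=100):
    """Crude H2-style check: mean of the last window minus mean of the
    first; a value near zero suggests time has reached an asymptote."""
    head = sum(times_ms[:window]) / window
    tail = sum(times_ms[-window:]) / window
    return tail - head
```

A run whose `drift` stays near zero over a long task is consistent with H2; a persistently positive drift, as with the RWA-SAR wall-clock times, flags a case for further analysis.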

  9. Result: Simple optimizations improve execution time even further
     • H4 confirmed
     • Each iterative optimization yields a smaller performance gain

  10. Result: Multiple agents on a machine increase execution time slightly (cause to be determined)

  11. Conclusions
     • Soar performance results demonstrated by research agents also carry over to applied agent systems of varying complexity
     • Performance of applied Soar agents does not degrade over the course of long-running tasks
     • There are opportunities to improve the performance of applied agents that were not designed with optimized performance as their primary goal
     • The number of applied agents of fairly high complexity that can be expected to run within acceptable performance constraints on typical, modern computer hardware is in the dozens
     • Future work:
       • Further experiments to provide full explanations for some cases of gradual performance slowing and increased overhead in multi-agent scenarios
