
Transfer Learning and Intelligence: an Argument and Approach




Presentation Transcript


  1. Transfer Learning and Intelligence: an Argument and Approach
  Matthew E. Taylor
  Joint work with: Gregory Kuhlmann and Peter Stone
  Learning Agents Research Group, Department of Computer Sciences, The University of Texas at Austin

  2. Result Summary: AGI-08
  • Help select source task for given target
  • Transfer a search heuristic

  3. AGI & Learning Why Learn? • Better solutions • On-line adaptation Current Problems: • Commonly applied to simple tasks • Algorithms often data-inefficient • Need substantial amounts of human knowledge One possible answer: • Transfer Learning

  4. Transfer Learning (related to Lifelong Learning or Multi-task Learning)
  Learn across multiple tasks:
  • Learn faster
  • Harder tasks become tractable
  • Learn with less human input
  • Prerequisite for AGI?

  5. Transfer Examples
  • Learn difficult tasks faster
    • Learn a set of simple tasks
    • Eventually learn target task
    • Total time reduction
  • Autonomous transfer
    • Explore the world, learning
    • Transfer autonomously
    • Effectively use past knowledge

  6. Transfer in Reinforcement Learning
  [Diagram: the standard agent-environment loop (state and reward flow to the agent, actions flow to the environment), drawn once for the source task and once for the target task]
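Below is a minimal sketch of the agent-environment loop pictured on this slide, assuming a hypothetical Environment object with reset()/step() methods and a simple tabular Q-learning Agent; the class and method names are illustrative, not taken from the talk.

```python
# A minimal sketch of the agent-environment loop shown on this slide, assuming a
# hypothetical Environment with reset()/step() and a tabular Q-learning Agent.
# These names and interfaces are illustrative, not taken from the talk.
import random
from collections import defaultdict

class Agent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)   # action-value table Q(s, a)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # one-step Q-learning update
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def run_episode(env, agent):
    """One pass around the loop: the environment emits state and reward, the agent emits actions."""
    state = env.reset()
    done = False
    while not done:
        action = agent.act(state)
        next_state, reward, done = env.step(action)
        agent.update(state, action, reward, next_state)
        state = next_state
```

In a transfer setting this same loop runs first in the source task and then, with knowledge carried over, in the target task.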

  7. Representative Transfer Results

  8. What to transfer?
  • Policy: π(s) → a
  • Action-value function: Q(s, a) → ℝ (see the sketch after this list)
  • Model of the environment: T(s, a) → s’
  • Rules / advice
  • Higher-level information:
    • Search heuristic
    • Learning rates
    • Appropriate features
  [Diagram: agent-environment loop]
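One hedged sketch of the action-value option, under the assumption that both tasks use tabular Q-functions: the learned source-task table seeds the target-task learner through inter-task mappings. The helpers map_state and map_action are hypothetical placeholders for such mappings, not the authors' code.

```python
# Sketch (not the authors' code) of transferring an action-value function:
# copy learned source-task Q-values into a target-task table through
# hand-provided inter-task mappings. map_state/map_action are hypothetical.
from collections import defaultdict

def transfer_q(source_q, target_states, target_actions, map_state, map_action):
    """Initialize a target-task Q-table from a learned source-task Q-table."""
    target_q = defaultdict(float)
    for s in target_states:
        for a in target_actions:
            # Look up the corresponding source state/action and reuse its value.
            target_q[(s, a)] = source_q[(map_state(s), map_action(a))]
    return target_q
```

Transferring a policy, an environment model, or a search heuristic follows the same pattern: translate the target-task query into source-task terms, then reuse the learned structure.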

  9. How to transfer?
  Human design (engineering task):
  • Construct a sequence of tasks
  • Provide learner with mappings between tasks (see the sketch after this list)
  Fully autonomous (not yet achieved):
  • Learn if tasks are related
  • Learn how tasks are related
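For the human-design route, the mappings themselves can be as simple as hand-written tables. The variable and action names below are invented purely for illustration and are not from the presentation.

```python
# Hand-provided inter-task mappings (the "engineering task" on this slide).
# All names here are invented for illustration.
STATE_MAPPING = {   # target-task state variable -> source-task state variable
    "dist_to_teammate_3": "dist_to_teammate_2",
    "angle_to_opponent_2": "angle_to_opponent_1",
}
ACTION_MAPPING = {  # target-task action -> source-task action
    "pass_to_teammate_3": "pass_to_teammate_2",
    "hold_ball": "hold_ball",
}

def map_state(target_state):
    """Build a source-task-shaped state from a target-task state (a dict of variables)."""
    return {src: target_state[tgt] for tgt, src in STATE_MAPPING.items()}

def map_action(target_action):
    """Translate a target-task action into its source-task counterpart."""
    return ACTION_MAPPING[target_action]
```

The fully autonomous setting would require learning tables like these rather than having an engineer write them down.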

  10. Result Summary: AGI-08
  • Help select source task for given target
  • Transfer a search heuristic (sketched below)
  • General Game Playing task
  W13: Transfer Learning for Complex Tasks
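As a rough illustration of "transfer a search heuristic" (a sketch only; the talk's actual General Game Playing method is not reproduced here), a value function learned in a source game can score states during search in the target game. learned_value, map_state, legal_moves, and apply_move are assumed stand-ins for a real game-playing interface.

```python
# Illustrative sketch: reuse a source-game value function as a search heuristic
# in the target game. All functions passed in are hypothetical stand-ins.
def transferred_heuristic(target_state, learned_value, map_state):
    """Score a target-game state with a value function learned in the source game."""
    return learned_value(map_state(target_state))

def greedy_move(state, legal_moves, apply_move, heuristic):
    """Pick the move whose successor the transferred heuristic rates highest."""
    return max(legal_moves(state), key=lambda m: heuristic(apply_move(state, m)))
```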
