Agents that Reduce Work and Information Overload and Beyond Intelligent Interfaces

Agents that Reduce Work and Information Overload and Beyond Intelligent Interfaces

Presented by

Maulik Oza

Department of Information and Computer Science

University of California, Irvine

moza@ics.uci.edu

ICS 205 – Spring 2002



Agents that Reduce Work and Information Overload

Pattie Maes


Why Agents?

  • Computers assisting in everyday tasks

  • Untrained users interacting with computers

  • Computers require continuous user interaction

  • “Indirect Management” required instead of “direct manipulation”

    • Collaboration with the user as a “personal assistant”


Figure: The interface agent does not act as an interface or layer between the user and the application. Rather, it behaves as a personal assistant which cooperates with the user on the task. The user is able to bypass the agent.


Agents' Duties

  • Perform tasks on the user's behalf

    • e.g. Selection of books

  • Train or teach the user

    • e.g. Image editing

  • Help different users collaborate

    • e.g. Meeting scheduling

  • Monitor events and procedures on the user’s behalf

    • e.g. Information filtering


Building Agents – Problems

  • Competence

    • How does an agent acquire the knowledge?

  • Trust

    • How does the user feel confident delegating the task?


Previous Approaches

  • End user programs the interface agent

    • User programmed rules

    • Disadvantages

      • Does not deal with the competence criterion

      • Requires too much insight from the end user

  • Knowledge-based approach

    • Domain knowledge programmed into the agent

    • Disadvantages

      • Programming the knowledge requires substantial work

      • Hard to adapt to a particular user's preferences

      • Trust a big issue


Approach – Machine Learning

  • Under certain conditions, the agent can program itself

    • Limited background knowledge

    • Learns from user and other agents

  • Conditions for the agent to learn

    • Repetition an important aspect

    • Behavior different for all users

  • The metaphor – “personal assistant”

    • Learns based on the preferences of the employer

    • Requires time before it performs efficiently

    • Learns based on experience, employer’s instructions as well as from experienced assistants
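The "personal assistant" metaphor above can be sketched in code. The following is a minimal, hypothetical Python sketch (class and method names are illustrative, not from Maes's paper): the agent observes repeated situation–action pairs and only offers a suggestion once the same pattern has recurred often enough, reflecting the condition that repetition is a precondition for learning.

```python
from collections import Counter

class LearningAgent:
    """Illustrative sketch: an agent that gains competence by watching
    the user repeat (situation, action) pairs."""

    def __init__(self, min_repetitions=3):
        self.observations = Counter()           # (situation, action) -> count
        self.min_repetitions = min_repetitions  # repetition threshold before suggesting

    def observe(self, situation, action):
        """Record one observed user action in a given situation."""
        self.observations[(situation, action)] += 1

    def suggest(self, situation):
        """Offer the most frequently observed action for this situation,
        but only once it has been seen often enough to trust."""
        candidates = {a: n for (s, a), n in self.observations.items() if s == situation}
        if not candidates:
            return None
        action, count = max(candidates.items(), key=lambda kv: kv[1])
        return action if count >= self.min_repetitions else None

agent = LearningAgent()
for _ in range(3):
    agent.observe("mail from boss", "prioritize")
print(agent.suggest("mail from boss"))  # prioritize
```

With fewer than `min_repetitions` observations the agent stays silent, which mirrors the trust-building phase of a new assistant.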


Advantages of the Approach

  • Less work

  • Adaptation

  • Transferring Information


Learning Technique

  • Observe and imitate

  • Adapt based on user feedback

    • Direct feedback

    • Indirect feedback

  • Trained based on examples

  • Advice from other agents


Figure: The interface agent learns in four different ways: (1) it observes and imitates the user's behavior, (2) it adapts based on user feedback, (3) it can be trained by the user on the basis of examples, and (4) it can ask for advice from other agents assisting other users.
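The four learning channels in the figure can be sketched as a single pool of weighted examples. The class, method names, and relative weights below are illustrative assumptions, not something specified in the paper:

```python
class FourChannelLearner:
    """Illustrative sketch of the four learning channels: observation,
    user feedback, explicit training examples, and peer-agent advice."""

    def __init__(self):
        self.examples = []  # list of (situation, action, weight)

    def observe(self, situation, action):
        """(1) Observe and imitate the user's behavior."""
        self.examples.append((situation, action, 1.0))

    def feedback(self, situation, action, positive):
        """(2) Adapt based on direct or indirect user feedback."""
        self.examples.append((situation, action, 2.0 if positive else -2.0))

    def train(self, situation, action):
        """(3) Accept a hand-picked training example from the user."""
        self.examples.append((situation, action, 3.0))

    def advise(self, peer_examples):
        """(4) Take advice from other agents, weighted below first-hand experience."""
        self.examples.extend((s, a, 0.5 * w) for s, a, w in peer_examples)

    def best_action(self, situation):
        """Pick the action with the highest accumulated weight."""
        scores = {}
        for s, a, w in self.examples:
            if s == situation:
                scores[a] = scores.get(a, 0.0) + w
        return max(scores, key=scores.get) if scores else None

learner = FourChannelLearner()
learner.observe("new mail", "file")
learner.feedback("new mail", "file", positive=True)
learner.advise([("new mail", "delete", 1.0)])
print(learner.best_action("new mail"))  # file
```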


Example Agents

  • Electronic mail handling agent

  • Meeting scheduling agent

  • Electronic news filtering agent

  • Recommending agent


Electronic Mail Agent – Maxim

  • Learns to prioritize, delete, forward, sort and archive mail

  • Uses Memory-based reasoning

  • Measures confidence level in the prediction

  • Actions determined by thresholds

  • Dealing with initial low competence


Figure: Simple caricatures convey the state of the agent to the user. The agent can be "alert" (tracking the user's actions), "thinking" (computing a suggestion), "offering a suggestion" (confidence in suggestion is above "tell-me" threshold), "surprised" if the suggestion is not accepted, "gratified" if the suggestion is accepted, "unsure" about what to do in the current situation (confidence below "tell-me" threshold, and thus suggestion is not offered), "confused" about what the user ends up doing, "pleased" that the suggestion it was not sure about turned out to be the right one after all, and "working" or performing an automated task (confidence in prediction above "do-it" threshold).
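Maxim's threshold logic can be roughly sketched as follows. This sketch substitutes a toy feature-overlap measure for the paper's actual memory-based reasoning distance metric; the threshold values and situation encoding are illustrative assumptions:

```python
def closeness(a, b):
    """Crude situation similarity: fraction of a's feature values shared by b."""
    shared = sum(1 for k in a if b.get(k) == a[k])
    return shared / max(len(a), 1)

class Maxim:
    """Illustrative sketch: predict an action from the most similar past
    situation; confidence decides whether to stay quiet, suggest
    ('tell-me' threshold), or act autonomously ('do-it' threshold)."""

    def __init__(self, tell_me=0.5, do_it=0.9):
        self.memory = []      # (situation dict, action)
        self.tell_me = tell_me
        self.do_it = do_it

    def remember(self, situation, action):
        self.memory.append((situation, action))

    def decide(self, situation):
        if not self.memory:
            return ("unsure", None)
        sim, action = max((closeness(situation, s), a) for s, a in self.memory)
        if sim >= self.do_it:
            return ("do-it", action)    # confidence high: act on the user's behalf
        if sim >= self.tell_me:
            return ("suggest", action)  # confidence moderate: offer a suggestion
        return ("unsure", None)         # confidence low: do nothing

maxim = Maxim()
maxim.remember({"sender": "boss", "subject": "meeting"}, "prioritize")
print(maxim.decide({"sender": "boss", "subject": "meeting"}))  # ('do-it', 'prioritize')
```

Raising `do_it` makes the agent more conservative, which is one way to cope with the low-competence startup phase mentioned above.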


Other agents
Other Agents the user. The agent can be "alert" (tracking the user's actions), "thinking" (computing a suggestion), "offering a suggestion" (confidence insuggestion is above "tell-me" threshold), "surprised" if the suggestion is not accepted, "gratified" if the suggestion is accepted, "unsure" about what to do in the current situation (confidence below "tell-me" threshold, and thus suggestion is not offered), "confused" about what the user ends up doing, "pleased" that the suggestion it was not sure about turned out to be the right one after all, and "working" or performing an automated task (confidence in prediction above "do-it" threshold).

  • Meeting Scheduling Agent

    • Generic learning agent adapted to the scheduling software.

  • News Filtering Agent – NewT

    • Filter Usenet news

    • Agents can be trained for specific purposes

  • Entertainment Selection Agent – Ringo

    • The “killer app”?

    • How to make enough data available to the system for it to make recommendations

    • User may rely too much on the system and stop entering new items

    • Solution – “virtual users”
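Ringo's approach is social (collaborative) filtering: correlate the user's ratings with other users' ratings and weight neighbours' opinions accordingly. A minimal sketch using Pearson correlation; the "virtual users" of the last bullet are simply hand-seeded rating profiles added to the neighbour pool, and all names and data here are illustrative:

```python
import math

def pearson(a, b):
    """Correlation between two users' ratings over commonly rated items."""
    common = [i for i in a if i in b]
    if len(common) < 2:
        return 0.0
    ma = sum(a[i] for i in common) / len(common)
    mb = sum(b[i] for i in common) / len(common)
    num = sum((a[i] - ma) * (b[i] - mb) for i in common)
    den = math.sqrt(sum((a[i] - ma) ** 2 for i in common) *
                    sum((b[i] - mb) ** 2 for i in common))
    return num / den if den else 0.0

def recommend(user, others, k=2):
    """Score unrated items by the k most similar neighbours' ratings,
    weighted by correlation; negatively correlated users are ignored."""
    scored = [(pearson(user, o), o) for o in others]
    scored.sort(key=lambda t: t[0], reverse=True)
    scores = {}
    for sim, other in scored[:k]:
        if sim <= 0:
            continue
        for item, rating in other.items():
            if item not in user:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

user = {"Blue Train": 5, "Kind of Blue": 4}
alice = {"Blue Train": 5, "Kind of Blue": 4, "Giant Steps": 5}   # could be a "virtual user"
bob = {"Blue Train": 1, "Kind of Blue": 5, "Master of Puppets": 4}
print(recommend(user, [alice, bob]))  # ['Giant Steps']
```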


Beyond Intelligent Interfaces: Exploring, Analyzing, and Creating Success Models of Cooperative Problem Solving

Gerhard Fischer

Brent Reeves


Cooperative Problem Solving

  • Augmenting a person’s ability to create, reflect, design, decide and reason

  • Conceptual framework behind a system determines its behavior


Empirical Study

  • Study of a success model

  • Highlights the inherent difficulties in high functionality systems

  • Necessary to get a better understanding of the system


Results from the study (1/2)

  • Users do not know that tools exist

  • Users do not know how to access tools

  • Users do not know when to use the tools

  • Users cannot combine or adapt tools for special uses


Results from the study (2/2)

  • Incremental problem specification

    • Identifying the problem

  • Achieving shared understanding

    • Identifying the solution

  • Integration between problem setting and problem solving

    • Context important in determining the problem


Analysis based on the results

  • Natural Language is less important than Natural Communication

  • Multiple specification technique

  • Mixed initiative dialogues

  • Management of trouble

  • Simultaneous exploration of problem and solution spaces

  • Humans operate in the physical world

  • Humans make use of distributed intelligence


Requirements for a Cooperative Problem Solving System

  • Beyond user interfaces

  • Problems in the context

  • Reliability of "back talk" in design situations must be increased

  • Need for specialization and putting knowledge in the world

  • Supporting human problem-domain communication with domain-oriented architectures


Conclusions

  • Interfaces of the future

    • Intelligent

    • Context aware

    • Trustworthy

    • Competent

    • Invisible

  • Issues

    • Privacy

    • Ethics

