A Cultural Sensitive Agent for Human-Computer Negotiation

Presentation Transcript


A Cultural Sensitive Agent for Human-Computer Negotiation

Galit Haim, Ya'akov Gal, Sarit Kraus and Michele J. Gelfand



Motivation

  • Buyers and sellers across geographical and ethnic borders

    • electronic commerce

    • crowd-sourcing

    • deal-of-the-day applications

  • Interaction between people from different countries

    → To succeed, an agent needs to reason about how culture affects people's decision making



Goals and Challenges

Can we build an agent that negotiates better than people in each country?

The approach

1. Collect data on each country

2. Use machine learning

3. Build an influence diagram

Sparse Data

Can we build a proficient negotiator with no expert-designed rules?

Noisy Data

Culture sensitive agent?



The Colored Trails (CT) Game

  • An infrastructure for agent design, implementation and evaluation for open environments

  • Designed in 2004 by Barbara Grosz and Sarit Kraus (Grosz et al., AIJ 2010)

CT is the right test-bed to use because it  provides a task analogy to the real world



The CT Configuration

  • 7×5 board of colored squares

  • One square is the goal

  • Set of colored chips

  • Move using a chip of the same color



CT Scenario

  • 2 players

  • Multiple phases:

    • communication: negotiation

      (alternating offer protocol)

    • transfer: chip exchange

    • movement

  • Complete information

  • Agreements are not enforceable

  • Complex dependencies

  • Game ends when one of the players reaches the goal or does not move for three movement phases



Scoring and Payment

  • 100-point bonus for getting to the goal

  • 5-point bonus for each chip left at the end of the game

  • 10-point penalty for each square in the shortest path from the end position to the goal

  • Performance does not depend on the other player's outcome
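The scoring rule above is simple enough to state as code. A minimal sketch (the function name and signature are my own, not from the CT implementation):

```python
def ct_score(reached_goal, chips_left, dist_to_goal):
    """Compute a player's Colored Trails score per the rules above.

    reached_goal: whether the player reached the goal square
    chips_left:   number of chips remaining at the end of the game
    dist_to_goal: shortest-path distance (in squares) from the end
                  position to the goal (0 if the goal was reached)
    """
    score = 0
    if reached_goal:
        score += 100            # goal bonus
    score += 5 * chips_left     # 5 points per leftover chip
    score -= 10 * dist_to_goal  # 10-point penalty per square short of the goal
    return score

print(ct_score(True, 3, 0))   # reached goal, 3 chips left: 100 + 15 = 115
print(ct_score(False, 6, 2))  # stopped 2 squares short: 30 - 20 = 10
```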



Personality, Adaptive Learning (PAL) Agent

  • Data from a specific country + machine learning → human behavior model → decision making → take action



Learning People's Reliability

Predict if the other player will keep its promise



Learning how People Accept Offers

Accept or reject the proposal?



Feature Set

  • Domain-independent features:

    • Current and resulting scores

    • Offer generosity

    • Reliability: between 0 (completely unreliable) and 1 (fully reliable)

    • Weighted reliability: over the previous rounds in the game

  • Domain-dependent features:

    • Round number
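The two reliability features can be sketched as follows. The exponential recency weighting in `weighted_reliability` is an illustrative assumption, not necessarily the weighting PAL uses:

```python
def reliability(promised, sent):
    """Fraction of promised chips actually transferred in one round:
    0 = completely unreliable, 1 = fully reliable."""
    if promised == 0:
        return 1.0  # nothing promised, nothing broken
    return min(sent, promised) / promised

def weighted_reliability(round_reliabilities, decay=0.5):
    """Recency-weighted average of per-round reliabilities.

    NOTE: the exponential decay per round is an assumption made for
    illustration; the paper only says the feature aggregates reliability
    over the previous rounds of the game.
    """
    if not round_reliabilities:
        return 1.0
    weights = [decay ** i for i in range(len(round_reliabilities))]
    # weight the most recent round the most
    total = sum(w * r for w, r in zip(weights, reversed(round_reliabilities)))
    return total / sum(weights)

print(reliability(promised=2, sent=1))        # 0.5
print(weighted_reliability([1.0, 1.0, 0.0]))  # recent broken promise dominates
```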



How to Model People's Behavior 

  • For each culture:

    • Use different features

    • Choose the learning algorithm that minimized error using 10-fold cross-validation

  • In the U.S. and Israel, we used only domain-independent features

  • In Lebanon, we added domain-dependent features
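The per-culture model selection step can be sketched as a generic k-fold cross-validation loop. The two candidate "learning algorithms" below are toy stand-ins for the real classifiers, and the helper names are my own:

```python
import random

def k_fold_error(classifier_fit, data, k=10, seed=0):
    """Mean held-out error over k folds.

    classifier_fit(train) must return a predict(x) function;
    data is a list of (features, label) pairs.
    """
    data = data[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    errors = []
    for i in range(k):
        test = folds[i]
        train = [ex for j, f in enumerate(folds) if j != i for ex in f]
        predict = classifier_fit(train)
        wrong = sum(1 for x, y in test if predict(x) != y)
        errors.append(wrong / len(test))
    return sum(errors) / k

# Two toy candidate learners (placeholders for the real ones):
def majority_fit(train):
    labels = [y for _, y in train]
    maj = max(set(labels), key=labels.count)
    return lambda x: maj

def threshold_fit(train):
    # classify by whether the first feature exceeds the training mean
    mean = sum(x[0] for x, _ in train) / len(train)
    return lambda x: x[0] > mean

data = [((v,), v > 0.5) for v in [i / 99 for i in range(100)]]
candidates = {"majority": majority_fit, "threshold": threshold_fit}
best = min(candidates, key=lambda n: k_fold_error(candidates[n], data))
print(best)  # the threshold rule wins on this separable toy data
```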



Data Collection with Sparse Data

  • Sources of data to train our classifiers:

    • 222 game instances of people playing against a rule-based agent

    • U.S. and Israel: 112 game instances of people playing other people

    • Lebanon: 64 additional games

      • "Nasty agent": less reliable in fulfilling its agreements

People in Lebanon in this data set almost always kept their agreements; as a result, PAL never kept agreements



People Learned Reliability



Experiment Design

  • 3 countries: 157 people

    • Israel: 63

    • Lebanon: 48

    • U.S.A.: 46

  • 30-minute tutorial

  • Boards varied dependencies between players

  • People were always the first proposer in the game

  • There was a single path to the goal



Decision Making

There are 3 decisions that PAL needs to make:

  • Reliability: determine PAL's transfer strategy

  • Accepting an offer: accept or reject a specific offer proposed by the opponent

  • Propose an offer

Use backward induction over two rounds…
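The proposal decision can be sketched as backward induction over two rounds: each candidate offer is valued by the learned acceptance probability, and a rejection falls back to the value of playing optimally one round deeper. The probabilities, scores, and offer names below are made-up stand-ins for PAL's learned models:

```python
def best_offer(offers, accept_prob, expected_score, fallback_value):
    """Pick the offer maximizing PAL's expected score.

    offers:            candidate offers PAL may propose this round
    accept_prob(o):    learned probability the human accepts offer o
    expected_score(o): PAL's score if o is accepted (and carried out)
    fallback_value:    PAL's expected score if o is rejected, computed
                       by the same reasoning one round deeper
    """
    def value(o):
        p = accept_prob(o)
        return p * expected_score(o) + (1 - p) * fallback_value
    return max(offers, key=value)

# Hypothetical numbers for illustration only.
no_agreement = 40  # PAL's score if the last round ends with no deal
offers = ["generous", "balanced", "selfish"]
probs = {"generous": 0.9, "balanced": 0.6, "selfish": 0.2}
scores = {"generous": 60, "balanced": 90, "selfish": 120}

# Round 2 (last round): a rejection leaves PAL with the no-agreement score.
round2 = best_offer(offers, probs.get, scores.get, no_agreement)
round2_value = probs[round2] * scores[round2] + (1 - probs[round2]) * no_agreement

# Round 1: the fallback is the value of playing optimally in round 2.
round1 = best_offer(offers, probs.get, scores.get, round2_value)
print(round1, round2)
```

Note how the better fallback in round 1 (the value of an optimal round 2, rather than the raw no-agreement score) makes early rejections less costly, which is the point of looking ahead.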



Success Rate: Getting to the Goal



Performance Comparison: Averages


Example in Lebanon

  • 2 chips for 2 chips; accepted → both sent

  • 1 chip for 1 chip; accepted

  • PAL learned that people in Lebanon were highly reliable → PAL did not send, the human sent

Games were relatively shorter

People were very reliable in the training games


Example in Israel

  • 2 chips for 2 chips; accepted → only PAL sent

  • 1 chip for 1 chip; accepted → only the human sent

  • 1 chip for 1 chip; accepted → only the human sent

  • 1 chip for 1 chip; accepted → only PAL sent

  • 1 chip for 3 chips; accepted → only the human sent

People were less reliable in the training games than in Lebanon

Games were relatively longer



Conclusions

  • PAL is able to learn to negotiate proficiently with people across different cultures

  • PAL was able to outperform people in all dependency conditions and in all countries

This is the first work to show that a computer agent can learn to negotiate with people in different countries



Colored Trails is easy to use for your own research

  • Open source empirical test-bed for investigating decision making

  • Easy to design new games

  • Built in functionality for conducting experiments with people

  • Over 30 publications

  • Freely available; extensive documentation

  • http://eecs.harvard.edu/ai/ct (or Google "colored trails")

    THANK YOU! [email protected]

