Presentation Transcript


  1. Boğaziçi University Department of Computer Engineering CONTENT-ORIENTED NEGOTIATION IN E-COMMERCE Reyhan Aydoğan Thesis Advisor: Asst. Prof. Pınar Yolum

  2. OUTLINE • Negotiation Architecture • Technical Details • Representation • Learning Phase • Similarity Estimation • Offering Service Mechanism • Developed System & Performance Evaluation • Discussion

  3. Negotiation Architecture • [Diagram] A Consumer Agent and a Producer Agent interact through a shared ontology; the producer is backed by a data repository (inventory information) and the consumer holds preferences such as <Preferences> <price v=low/> <speed v=high/> … </Preferences> • Steps: 1 - Request; 2 - Evaluate request and learn; 3 - Provide service or offer an alternative; 4 - Evaluate the offer; 5 - Accept or re-request; … N - Negotiate and provide service

  4. Negotiation Challenges • Representation • Represent the request and offers • Learning • Learn about consumer’s preferences based on requests and counter offers • Similarity Estimation • Estimate similarity between the request and available services • Revision • Revise requests or offers based on incoming information

  5. Representation • The request of the consumer and the counter-offer of the provider are represented as vectors. • Example domain • Service: Wine • Service features: winery, type of grape, sugar level, flavor, body of the wine, color of the wine, region • Example request or offer vector (winery, type of grape, sugar level, flavor, body, color, region): (Bancroft, ChardonnayGrape, Dry, Moderate, Medium, White, NapaRegion)
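
As a minimal illustration of this vector representation (class and field names below are illustrative, not taken from the thesis implementation), a request or offer can be stored as a fixed-order attribute array:

// Illustrative sketch: a request or offer as a fixed-order attribute vector.
// The order mirrors the slide: winery, grape, sugar, flavor, body, color, region.
public class ServiceVector {
    private final String[] values;

    public ServiceVector(String... values) { this.values = values; }

    public String get(int feature) { return values[feature]; }

    public int length() { return values.length; }
}

// Example request from the slide:
// new ServiceVector("Bancroft", "ChardonnayGrape", "Dry", "Moderate", "Medium", "White", "NapaRegion");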

  6. Learning Phase • Preferences: relative importance of the features of the service • Learn preferences over interactions: • Requires incremental learning algorithms • Learn preferences as a concept: • Version Space as an inductive learning technique • Decision Trees

  7. Learning Phase: Version Space • Maintain two extreme sets of hypotheses • The most general hypotheses • Initially, every possible hypothesis is in this set • As the consumer rejects offers, this set is specialized • The most specific hypotheses • Initially empty • As the consumer makes requests, her requests are generalized and kept in this set • The goal: obtain a single description

  8. Modified Version Space • Goal: support learning disjunctive concepts • E.g. (red and strong wine) OR (rose and delicate wine) • Extend the hypothesis language to allow disjunctions • Specialize the general set minimally • The general set contains all possible hypotheses • Generalize the specific set minimally • The specific set includes only positive samples
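
A toy sketch of the specific-set side of this modification, assuming hypotheses are attribute arrays in which "?" matches any value (the minimal specialization of the general set against rejected offers is omitted); all names are illustrative:

import java.util.ArrayList;
import java.util.List;

// Toy sketch: the specific set of a disjunctive version space.
// Each disjunct is an attribute array; "?" matches any value.
public class DisjunctiveSpecificSet {
    private final List<String[]> disjuncts = new ArrayList<>();

    // Add a positive sample (a consumer request): if no existing disjunct
    // already covers it, keep it as a new disjunct of its own.
    public void addPositive(String[] sample) {
        for (String[] h : disjuncts) {
            if (covers(h, sample)) return;   // already covered, nothing to do
        }
        disjuncts.add(sample.clone());
    }

    // True if hypothesis h covers the sample: every attribute matches or is "?".
    static boolean covers(String[] h, String[] sample) {
        for (int i = 0; i < h.length; i++) {
            if (!h[i].equals("?") && !h[i].equals(sample[i])) return false;
        }
        return true;
    }

    public List<String[]> hypotheses() { return disjuncts; }
}

Keeping each uncovered request as its own disjunct is the simplest minimal generalization; it is only meant to show how disjunctive preferences can coexist in the specific set.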

  9. Decision Trees • Example tree: the root tests FLAVOR (Strong, Moderate, Delicate); the Strong and Moderate branches test COLOR (Red, Rose); the Delicate branch is a rejecting leaf • Acceptable service: (Strong and Red) OR (Moderate and Rose) • Rejectable service: (Strong and Rose) OR (Moderate and Red) OR (Delicate)
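
Hard-coding the example tree above gives a tiny classifier (purely illustrative; in the thesis the tree is rebuilt from the observed requests rather than written by hand):

// The example tree from the slide as a classifier:
// acceptable = (Strong AND Red) OR (Moderate AND Rose).
public class WineTreeExample {
    static boolean isAcceptable(String flavor, String color) {
        switch (flavor) {
            case "Strong":   return color.equals("Red");
            case "Moderate": return color.equals("Rose");
            default:         return false;   // the Delicate branch is always rejected
        }
    }

    public static void main(String[] args) {
        System.out.println(isAcceptable("Strong", "Red"));    // true
        System.out.println(isAcceptable("Moderate", "Red"));  // false
    }
}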

  10. Offering Service • Random Offering Service • Offering service considering only the current request (SCR) • Offering Service using Version Space (VS) • Offering Service using Modified Version Space (MVS) • Offering Service using Decision Trees (DT)

  11. Offering Service using MVS • At the beginning, load all possible services (e.g. wine products) into the service list • After each request, train the MVS with the request as a positive sample • If there is an exactly matching service, offer it • Otherwise, • Filter the service list with the most general set • Estimate the similarity of each remaining service to the most specific set of the learning component • Offer the most similar service
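
A sketch of this offering step, assuming the request has already been added to the MVS as a positive sample and that the version-space components are available as a filter (the most general set) and a scorer (similarity to the most specific set); all names are illustrative:

import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.function.ToDoubleFunction;

// Sketch of the MVS-based offering step over attribute-vector services.
public class MvsOffering {
    static String[] offer(String[] request,
                          List<String[]> services,
                          Predicate<String[]> generalSetCovers,
                          ToDoubleFunction<String[]> similarityToSpecificSet) {
        for (String[] s : services) {
            if (Arrays.equals(s, request)) return s;         // exact match: offer it directly
        }
        services.removeIf(s -> !generalSetCovers.test(s));   // filter with the most general set
        String[] best = null;
        double bestSimilarity = -1;
        for (String[] s : services) {                        // rank the rest by similarity
            double sim = similarityToSpecificSet.applyAsDouble(s);
            if (sim > bestSimilarity) { bestSimilarity = sim; best = s; }
        }
        return best;                                         // null if nothing survives the filter
    }
}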

  12. Offering Service using DT • After each request, rebuild the decision tree • Remove the services that are classified as negative from the service list • Offer the service most similar to all previous and current requests
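
The DT-based step can be sketched the same way, with the rebuilt tree passed in as a predicate and similarity against all requests seen so far passed in as a scorer (names are illustrative):

import java.util.List;
import java.util.function.Predicate;
import java.util.function.ToDoubleFunction;

// Sketch of the DT-based offering step.
public class DtOffering {
    static String[] offer(List<String[]> services,
                          Predicate<String[]> treeAccepts,
                          ToDoubleFunction<String[]> similarityToAllRequests) {
        services.removeIf(s -> !treeAccepts.test(s));   // drop services the tree classifies as negative
        String[] best = null;
        double bestSimilarity = -1;
        for (String[] s : services) {
            double sim = similarityToAllRequests.applyAsDouble(s);
            if (sim > bestSimilarity) { bestSimilarity = sim; best = s; }
        }
        return best;                                    // most similar surviving service
    }
}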

  13. Tversky’s Similarity Measure • SMpq = α * (common) / (α * (common) + β * (difference)) • Terms: • Common: number of matched attributes • Difference: number of unmatched attributes • α and β: weights; here α is equal to β • Example: • S1 = (Full, Strong, Red) • S2 = (Full, Delicate, Rose) • SMs1s2 = 1 / 3
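
A direct implementation of this measure over equal-length attribute vectors, reproducing the slide’s example with α = β = 1 (class and method names are illustrative):

// Tversky-style match over equal-length attribute vectors.
public class TverskySimilarity {
    static double similarity(String[] p, String[] q, double alpha, double beta) {
        int common = 0, difference = 0;
        for (int i = 0; i < p.length; i++) {
            if (p[i].equals(q[i])) common++; else difference++;
        }
        return (alpha * common) / (alpha * common + beta * difference);
    }

    public static void main(String[] args) {
        String[] s1 = {"Full", "Strong", "Red"};
        String[] s2 = {"Full", "Delicate", "Rose"};
        System.out.println(similarity(s1, s2, 1, 1));   // 1 / (1 + 2) = 0.333...
    }
}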

  14. Architectural Setup • Implementation in Java • Ontology language: OWL • Ontology reasoner: Jena2 • Ontology • Shared ontology: a modified version of the Wine ontology • Producer’s service ontology: “WineStock”, an extension of the wine ontology
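
A minimal sketch of loading an OWL ontology with Jena2 and inspecting part of its class hierarchy; the file name and class URI below are placeholders, not the actual locations used in the thesis:

import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.util.iterator.ExtendedIterator;

// Load a local copy of the shared wine ontology and list the direct
// subclasses of one of its classes (URI and file name are placeholders).
public class LoadWineOntology {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        model.read("file:wine.owl");
        OntClass wineColor = model.getOntClass("http://example.org/wine#WineColor");
        ExtendedIterator subclasses = wineColor.listSubClasses(true);  // direct subclasses only
        while (subclasses.hasNext()) {
            System.out.println(subclasses.next());
        }
    }
}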

  15. Evaluating The Learning Phase • Criterion: number of iterations needed to reach consensus • Five systems are compared • Similarity with Modified Version Space (SMVS) • System using Decision Trees (DT) • Similarity with Version Space (SVS) • Similarity with Current Request (SCR) • Random Offering (Random) • Five scenarios are used • Each is run five times and the average of the runs is taken • The inventory contains 19 available services

  16. Evaluating The Learning Phase Cont. • Scenario 1: • Preference of consumer: any wine whose sugar level is dry • Availability in producer’s inventory: 15 products • Scenario 2: • Preference of consumer: any wine that is red and dry • Availability in producer’s inventory: 8 products • Scenario 3: • Preference of consumer: any wine that is red, dry and moderate • Availability in producer’s inventory: 4 products • Scenario 4: • Preference of consumer: any wine that is strong and red • Availability in producer’s inventory: 2 products • Scenario 5: • Preference of consumer: any wine whose flavor is strong and whose color is red or rose • Availability in producer’s inventory: 3 products

  17. Evaluating The Learning Phase Cont. • Average number of iterations for five scenarios

  18. Similarity Measure • Tversky’s Similarity Measure • Proposed Semantic Similarity Measure (RP) • Resnik’s Semantic Similarity Measure • Lin’s Semantic Similarity Measure • Wu & Palmer’s Semantic Similarity Measure

  19. RP Semantic Similarity • Example hierarchy: Thing → WineColor → {ReddishColor, White}; ReddishColor → {Red, Rose} • Parent versus grandparent: ReddishColor is more similar than WineColor to Rose • Parent versus sibling: WineColor is more similar than ReddishColor to White • Sibling versus grandparent: Red is more similar than WineColor to Rose

  20. RP Semantic Similarity Cont. • Start with a similarity of 1 at the node containing the first concept and multiply it by a constant at each step • Assume • m is the constant for a parent step • n is the constant for a sibling step

  21. RP Semantic Similarity Sample • Same example hierarchy: Thing → WineColor → {ReddishColor, White}; ReddishColor → {Red, Rose} • Assume m = 2/3 and n = 4/7 • Rose-ReddishColor: 1 * (2/3) = 0.67 • Rose-Red: 1 * (4/7) = 0.57 • Rose-WineColor: 1 * (2/3) * (2/3) = 0.45 • Rose-Thing: 1 * (2/3) * (2/3) * (2/3) = 0.30 • Rose-White: 1 * (4/7) * (2/3) = 0.38
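
A toy implementation of this calculation over the slide’s example hierarchy, handling only the ancestor and ancestor-sibling cases shown above (the measure in the thesis is more general; all names are illustrative):

import java.util.HashMap;
import java.util.Map;

// Toy RP-style similarity: walking up from the first concept multiplies the
// score by m per parent step, and stepping across to a sibling multiplies by n.
public class RpSimilarityExample {
    static final double M = 2.0 / 3.0;   // parent constant from the slide
    static final double N = 4.0 / 7.0;   // sibling constant from the slide

    static final Map<String, String> PARENT = new HashMap<>();
    static {
        PARENT.put("WineColor", "Thing");
        PARENT.put("ReddishColor", "WineColor");
        PARENT.put("White", "WineColor");
        PARENT.put("Red", "ReddishColor");
        PARENT.put("Rose", "ReddishColor");
    }

    static double similarity(String a, String b) {
        double score = 1.0;                          // start at the node of the first concept
        for (String node = a; node != null; node = PARENT.get(node)) {
            if (node.equals(b)) return score;        // b is an ancestor of a (or a itself)
            String pNode = PARENT.get(node);
            String pB = PARENT.get(b);
            if (pNode != null && pNode.equals(pB)) {
                return score * N;                    // b is a sibling of this ancestor
            }
            score *= M;                              // climb one level towards Thing
        }
        return 0.0;                                  // not reachable in this toy hierarchy
    }

    public static void main(String[] args) {
        System.out.println(similarity("Rose", "ReddishColor"));  // ≈ 0.67
        System.out.println(similarity("Rose", "Red"));           // ≈ 0.57
        System.out.println(similarity("Rose", "WineColor"));     // ≈ 0.44
        System.out.println(similarity("Rose", "White"));         // ≈ 0.38
        System.out.println(similarity("Rose", "Thing"));         // ≈ 0.30
    }
}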

  22. Evaluating Similarity Metrics • Scenarios 1-7: use dataset 1 (19 services) • Scenarios 8-10: use dataset 2 (50 services) • Scenarios 6-10: consider the hierarchical relations in preferences • Sample scenario 9: • An expensive red wine located around the California region, or a cheap white wine located around the Texas region

  23. Evaluating Similarity Metrics Cont. • Average number of iterations for the ten scenarios

  24. General Results • Learning preferences shortens the negotiation duration • Using semantic similarity increases performance when preferences are taken into account • Both Modified Version Space and Decision Trees give reasonable results

  25. Contributions of the Thesis • A multi-issue negotiation mechanism based on the content of the service • Use of ontologies to work with semantics • Extension of the Candidate Elimination Algorithm (CEA) for disjunctive concepts • A new semantic similarity measure

  26. Future Work • Modeling producer’s preferences and business policy • The producer may prefer to provide some services over others • Integration of learning with ontology reasoning
