
EPFL’s AI Laboratory


Presentation Transcript


  1. EPFL’s AI Laboratory Meeting at the University Nancy 2 – 11 Oct 2006

  2. EPFL in some numbers…
  • Ecole Polytechnique Fédérale de Lausanne
  • Founded in 1879 as part of the University of Lausanne
  • Since 1969, one of the two federally funded universities in Switzerland
  • In total: ~10'000 people
  • Annual budget from the Swiss government: ~380M€
  • >5000 bachelor and master students in 13 domains
  • >1000 doctoral students
  • >250 research faculty
  • >100 nationalities

  3. Computing at EPFL
  • The School of Computer and Communication Sciences at EPFL is one of the major European centers of teaching and research in information technology
  • 34 professors
  • 200 PhD students
  • >1000 bachelor and master students
  • >8M€ of external research funding

  4. The AI lab at EPFL
  • Director: Prof. Boi Faltings
  • Software agents
  • Case-based reasoning
  • Constraint-based reasoning
  • Recommender systems
  • 3 teaching professors
  • Development of research and teaching activities in the domain of natural language
  • Numerical constraint satisfaction problems
  • 2 post-docs
  • In charge of the European projects CASCOM, DIP, and Knowledge Web
  • 12 PhD students
  • http://liawww.epfl.ch/

  5. Artificial Intelligence
  [Diagram: agents 1…i…k connected to a constraint solver over variables x1–x5]
  • Definition 1: software that mimics human behavior
  • Definition 2: software that makes people more intelligent
  • In particular, for artificial domains: accounting, design, planning, coordination…
  • Our vision of AI: combine the expertise and concerns of many parties to solve problems that no individual could solve alone

  6. Open Issues
  • Networks lead to unbounded, dynamic problems => distributed problem-solving
  • Make agents' incentives compatible to discourage manipulation
  • Model and reason with people's preferences

  7. Preference Elicitation – P. Viappiani
  • Objective: build interactive tools that help users search for their most preferred item in a large collection of options (preference-based search)
  • Consider example-critiquing, a technique that lets users incrementally construct preference models by critiquing example options presented to them (a minimal sketch of the loop follows below)
  • Investigate techniques for generating automatic suggestions, taking into account the uncertainty of the preference model, and heuristics
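As a rough illustration of the example-critiquing loop mentioned above, here is a minimal sketch. The names `score` and `ask_user` and the flat list-of-critiques model are illustrative assumptions, not the actual tool, which also generates model-based suggestions under preference uncertainty:

```python
# Minimal sketch of example-critiquing, assuming the preference model is
# just a list of critiques consumed by a caller-supplied scoring function.

def example_critiquing(options, score, ask_user, top_k=5, max_rounds=10):
    preferences = []                                   # elicited so far
    shown = []
    for _ in range(max_rounds):
        # Rank the catalog under the current, possibly incomplete, model.
        shown = sorted(options, key=lambda o: score(o, preferences),
                       reverse=True)[:top_k]
        critique = ask_user(shown)                     # e.g. "cheaper than #2"
        if critique is None:                           # user accepts an option
            break
        preferences.append(critique)                   # refine the model
    return shown
```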

  8. Preference Elicitation – V. Schickel
  • Objective: estimate missing preferences from an incompletely elicited user model, and see how many preferences can be transferred
  • Study how ontologies can be used to model the user's preferences and e-catalog products
  • Investigate inference techniques to "guess" missing preferences
  • Build more robust similarity metrics for hierarchical ontologies

  9. Reputation Mechanism – R. Jurca
  • Objective: build reputation mechanisms for online environments where agents do not a priori trust each other
  • The reputation of an agent is obtained by aggregating feedback
  • While most previous results assume that agents report feedback honestly, we explicitly consider rational reporting incentives and guarantee that truth-telling is in the best interest of the reporters. Carefully designed payment schemes (agents get paid for reporting feedback) ensure that truth-telling is a Nash equilibrium: as long as the other agents report honestly, no reporter can gain by lying (see the sketch below)
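A toy version of such a payment scheme, in the spirit of proper-scoring-rule based peer prediction; the actual mechanisms studied in the lab are more refined (e.g. budget-minimizing variants), and the `posterior` table here is an assumed input derived from a common prior:

```python
import math

def log_score_payment(my_report, reference_report, posterior,
                      scale=1.0, offset=0.0):
    """Pay a reporter the scaled log-probability of a reference reporter's
    feedback, under the posterior induced by the reporter's own feedback.
    Because the logarithm is a proper scoring rule, expected payment is
    maximized by reporting the truly observed signal whenever the reference
    reporter is honest -- i.e. truth-telling is a Nash equilibrium.
    posterior[r][s] = Pr(reference reports s | my true observation is r)."""
    return offset + scale * math.log(posterior[my_report][reference_report])

# Toy example: observing "good" quality makes a peer's "good" report likely.
posterior = {"good": {"good": 0.8, "bad": 0.2},
             "bad":  {"good": 0.3, "bad": 0.7}}
pay = log_score_payment("good", "good", posterior, offset=2.0)  # ~1.78
```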

  10. Reputation Mechanism – Q. Nguyen
  • Objective: find local search algorithms that achieve good performance while remaining incentive-compatible for bounded-rational agents
  • Studying randomized algorithms and local search algorithms
  • Main contributions include a local search algorithm called Random Subset Optimization (sketched below) and an incentive-compatible, budget-balanced protocol for bounded-rational agents called the leave-one-out protocol
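A hedged sketch of the "random subset" idea: repeatedly pick a random subset of variables and improve them while the rest stay fixed. The greedy per-variable move below is one simple instantiation, not the exact algorithm from the papers:

```python
import random

def random_subset_search(variables, domains, utility,
                         subset_size=2, iters=1000):
    """Local search over a dict assignment; `utility` maps a full
    assignment to a number, `variables` is a list of variable names."""
    assign = {v: random.choice(domains[v]) for v in variables}
    best = utility(assign)
    for _ in range(iters):
        trial = dict(assign)
        for v in random.sample(variables, subset_size):
            # Best value for v given all other current values.
            trial[v] = max(domains[v], key=lambda d: utility({**trial, v: d}))
        if utility(trial) >= best:
            assign, best = trial, utility(trial)
    return assign, best
```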

  11. Distributed constraint optimization – A. Petcu
  • Objective: develop Multiagent Constraint OPtimization (MCOP) for solving numerous practical problems, like planning, that are distributed over many agents
  • Developing a mechanism that solves MCOPs with a linear number of messages (DPOP); a compressed sketch follows below
  • Study dynamic environments (problems can change over time), a self-stabilizing version of DPOP that can be applied to them, and techniques that maintain privacy
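For intuition, here is a compressed sketch of DPOP's two phases on a tree-structured problem: one UTIL message up and one VALUE message down per node, hence the linear message count. Real DPOP works on arbitrary constraint graphs via pseudo-trees; the data layout below is an assumption:

```python
def dpop(children, parent, domains, edge_util, root):
    """DPOP on a tree.  children maps every node to its (possibly empty)
    list of children; edge_util[(p, c)][pv][cv] is the utility of parent
    value pv combined with child value cv on edge (p, c)."""
    best_choice = {}  # (node, parent_value) -> best value for node

    def util_message(node):
        # UTIL phase (bottom-up): for each parent value, the best utility
        # achievable in this node's subtree.
        child_msgs = [util_message(c) for c in children[node]]
        msg = {}
        for pv in domains[parent[node]]:
            def subtree_util(v):
                return (edge_util[(parent[node], node)][pv][v]
                        + sum(m[v] for m in child_msgs))
            best_v = max(domains[node], key=subtree_util)
            best_choice[(node, pv)] = best_v
            msg[pv] = subtree_util(best_v)
        return msg

    # Root picks its value from its children's UTIL messages...
    root_msgs = [util_message(c) for c in children[root]]
    assignment = {root: max(domains[root],
                            key=lambda v: sum(m[v] for m in root_msgs))}

    # ...then the VALUE phase propagates choices top-down.
    def value_phase(node):
        for c in children[node]:
            assignment[c] = best_choice[(c, assignment[node])]
            value_phase(c)

    value_phase(root)
    return assignment
```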

  12. Using an Ontological A-priori Score to Infer User’s Preferences
  Advisor: Prof. Boi Faltings – EPFL

  13. Presentation Layout
  • Introduction
    – Introduce the problem and existing techniques
  • Transferring User’s Preferences
    – Introduce the assumptions behind our model
    – Explain the transfer of preferences
  • Validation of the model
    – Experiments on MovieLens
  • Conclusion
    – Remarks & future work

  14. Problem Definition
  • Recommendation Problem (RP): recommend a set of items I to the user from the set of all items O, based on their preferences P
  • Use a recommender system (RS) to find the best items
  • Examples:
    – NotebookReview.com (O = notebooks, P = criteria such as processor type or screen size)
    – Amazon.com (O = books, DVDs, …; P = grading)
    – Google (O = web documents, P = keywords)

  15. Recommendation Systems
  • Three approaches to building an RS: [1][2][3][4][5]
  • Case-Based Filtering: uses previous cases, e.g. Collaborative Filtering (cases = users' ratings; an item-item similarity of the kind used as a baseline later is sketched below)
    – Good performance, low cognitive requirements
    – Suffers from sparsity, latency, shilling attacks, and the cold-start problem
  • Content-Based Filtering: uses the items' descriptions, e.g. Multi-Attribute Utility Theory (descriptions = attributes)
    – Matches the user's preferences, very good precision
    – Requires elicitation of weights and value functions
  • Rule-Based Filtering: uses associations between items, e.g. Data Mining (associations = rules)
    – Finds hidden relationships, good domain discovery
    – Expensive and time consuming
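Since item-item CF with adjusted cosine reappears as the baseline in the experiments (slide 27), here is a minimal sketch of that similarity. The `ratings` layout (user → {item: rating}) is an assumption for illustration:

```python
import math

def adjusted_cosine(ratings, item_a, item_b):
    """Adjusted-cosine similarity between two items: cosine over
    mean-centered ratings, restricted to users who rated both items."""
    num = da = db = 0.0
    for user, r in ratings.items():
        if item_a in r and item_b in r:
            mean = sum(r.values()) / len(r)       # user's mean rating
            xa, xb = r[item_a] - mean, r[item_b] - mean
            num += xa * xb
            da += xa * xa
            db += xb * xb
    return num / math.sqrt(da * db) if da and db else 0.0
```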

  16. A Major Problem in RS: The Elicitation Problem => Incomplete user model
  • Affects both Collaborative Filtering and Multi-Attribute Utility Theory
  • [Figure: a user's sparse rating vector, e.g. I134→4, I245→3, I55→4, I4→5, with most items unrated]
  • This is the central problem of RS

  17. Presentation Layout
  • Introduction
    – Introduce the problem and existing techniques
  • Transferring User’s Preferences
    – Introduce the assumptions behind our model
    – Explain the transfer of preferences
  • Validation of the model
    – Experiments on MovieLens
  • Conclusion
    – Remarks & future work

  18. Ontology
  [Figure: example ontology D1 – root Transport branches into On-land and On-sea; On-land into Vehicle, On-sea into Boat; Vehicle into Car and Bus; Car into City, All_terrain, SUV, Compact]
  • An ontology λ is a graph (a DAG) where nodes model concepts (instances being the items) and edges represent the relations (features)
  • Sub-concepts are distinguished by certain features
  • Features are usually not made explicit

  19. The Score of a Concept – S
  • The RP is viewed as predicting the score S assigned to a concept (a group of items)
  • The score can be seen as a lower-bound function that models how much a user likes an item
  • S is a function that satisfies the following assumptions:
    – A1: S depends on the features of the item (items are modeled by a set of features)
    – A2: each feature contributes independently to S (eliminates inter-dependence between features)
    – A3: unknown or disliked features make no contribution (reflects the fact that users are risk-averse; liking a concept ⇏ liking a sub-concept)

  20. A-priori Score – APS
  • The structure of the ontology contains information
  • Use APS(c) to capture the knowledge of concept c
  • With no information, assume S(c) uniform over [0..1]: P(S(c) > x) = 1 − x
  • A concept can have n descendants; assumption A3 => P(S(c) > x) = (1 − x)^(n+1)
  • Expected score: E(c) = ∫ x·f_c(x) dx = 1/(n+2), hence APS(c) = 1/(n_c + 2), where n_c is the number of descendants of c
  • APS is 0.5 at the leaves and decreases towards the root; it uses no user information (see the code sketch below)
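The APS definition translates directly into code; the toy ontology below only mirrors the shape of the earlier example:

```python
def descendants(children, c):
    """Number of descendants of concept c (children maps concept -> list)."""
    return sum(1 + descendants(children, d) for d in children.get(c, []))

def aps(children, c):
    """A-priori score: APS(c) = 1 / (n_c + 2)."""
    return 1.0 / (descendants(children, c) + 2)

# Toy ontology shaped like the earlier example.
children = {"Vehicle": ["Car", "Bus"], "Car": ["SUV", "Compact"]}
assert aps(children, "SUV") == 0.5        # a leaf has n_c = 0
assert aps(children, "Vehicle") == 1 / 6  # 4 descendants; smaller near the root
```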

  21. Inference Idea
  • Option 1: select the best lowest common ancestor, lca(SUV, Bus) – AAAI’06
  • Option 2: select the concept closest to Bus, i.e. find the most similar concept – IJCAI’07
  [Figure: Vehicle branches into Car and Bus; Car into SUV and Pickup; known utilities S(SUV) = 0.8 and S(Pickup) = 0.6; infer S(Bus) = ?]

  22. Upward Inference
  • A1: the score depends on the features of the item
  • Going up k levels (e.g. from SUV to Vehicle) ⇒ removing k known features
  • Removing features ⇒ S decreases or stays equal (since S = ∑ of the feature contributions)
  • S(Vehicle | SUV) = α(Vehicle, SUV) · S(SUV), where α ∈ [0..1] is the ratio of liked features in common
  • How to compute α?
    – α = #features(Vehicle) / #features(SUV): does not take the feature distribution into account
    – α = APS(Vehicle) / APS(SUV)

  23. Downward Inference
  • A2: features contribute independently to the score
  • A3: users are pessimistic; liking some features ⇏ liking others
  • Going down l levels (e.g. from Vehicle to Bus) ⇒ adding l unknown features
  • Adding features ⇒ S increases or stays equal (since S = ∑ of the feature contributions), but S(Bus | Vehicle) = α·S(Vehicle) with α ≥ 1 does not follow
  • Instead, S(Bus | Vehicle) = S(Vehicle) + β(Vehicle, Bus), where β ∈ [0..1] is the sum of the contributions of the features in Bus that are not present in Vehicle
  • How to compute β? β = APS(Bus) − APS(Vehicle)

  24. Overall Inference
  • There exists a chain between two concepts (e.g. SUV and Bus) but not a directed path
  • As for Bayesian networks, we assume independence
  • Combining both steps through the lowest common ancestor: S(Bus | SUV) = α·S(SUV) + β
  • The score of a concept y knowing x is defined as: S(y|x) = α(x, lca(x,y))·S(x) + β(y, lca(x,y))
  • The score function is asymmetric
  • Known scores are elicited from the user; for the rest, use the APS (a sketch combining the two steps follows below)
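Putting upward and downward inference together through the lowest common ancestor, reusing the `aps` helper and toy `children` map from the APS sketch above:

```python
def path_to_root(parent, c):
    path = [c]
    while c in parent:              # parent maps concept -> its parent
        c = parent[c]
        path.append(c)
    return path

def lca(parent, x, y):
    """Lowest common ancestor of x and y in the ontology tree."""
    ancestors = set(path_to_root(parent, x))
    return next(c for c in path_to_root(parent, y) if c in ancestors)

def transfer_score(parent, children, x, s_x, y):
    """S(y|x) = alpha(x, lca) * S(x) + beta(y, lca)."""
    z = lca(parent, x, y)
    alpha = aps(children, z) / aps(children, x)  # upward: features removed
    beta = aps(children, y) - aps(children, z)   # downward: features added
    return alpha * s_x + beta

parent = {"Car": "Vehicle", "Bus": "Vehicle", "SUV": "Car", "Compact": "Car"}
# Infer S(Bus) from S(SUV) = 0.8, as in the example of slide 21:
print(transfer_score(parent, children, "SUV", 0.8, "Bus"))  # 0.8/3 + 1/3 = 0.6
```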

  25. Presentation Layout
  • Introduction
    – Introduce the problem and existing techniques
  • Transferring User’s Preferences
    – Introduce the assumptions behind our model
    – Explain the transfer of preferences
  • Validation of the model
    – WordNet (used to build the similarity metric – see IJCAI’07)
    – Experiments on MovieLens
  • Conclusion
    – Remarks & future work

  26. Validation – Transfer - I
  • MovieLens database, used by the CF community: 100,000 ratings on 1682 movies by 943 users
  • Movies are modeled by 22 attributes: 19 themes, MPAA rating, duration, and release date, extracted from IMDB.com
  • Built an ontology modeling the 22 attributes of a movie
  • Used definitions found in various online dictionaries

  27. Validation – Transfer - II
  • Experiment setup – for each of the 943 users:
    1. Filtered out users with fewer than 65 ratings
    2. Split each user’s data into a learning set and a test set
    3. Computed utility functions from the learning set: a frequency-count algorithm for 10 of the attributes, our inference approach for the other 12
    4. Predicted the grades of 15 movies from the test set with: our approach – HAPPL (LNAI 4198 – WebKDD’05); item-item CF (using adjusted cosine); popularity ranking
    5. Computed the accuracy of the top-5 predictions using the Mean Absolute Error (MAE; see below)
    6. Went back to step 3 with bigger training sets {5,10,20,…,50}
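For reference, the metric in step 5 is simply the mean of the absolute prediction errors:

```python
def mean_absolute_error(predicted, actual):
    """MAE = (1/N) * sum_i |predicted_i - actual_i|; lower is better."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# e.g. mean_absolute_error([4.2, 3.1, 5.0], [4, 3, 4]) == 0.433...
```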

  28. Validation – Transfer - III

  29. Validation – Transfer - IV

  30. Conclusions
  • We have introduced the idea that an ontology can be used to transfer missing preferences
  • The ontology can be used to compute an a-priori score, APS(c) = 1/(n_c + 2)
  • Inference model with an asymmetric property
  • Outperforms CF without using other people’s information
  • Requirements & conditions: A2 – features contribute to preferences independently; need an ontology modeling the whole domain
  • Next steps: try to learn the ontology. Preliminary results show that we still outperform CF; a learned ontology gives a more restricted search space

  31. References - I
  [1] Jiyong Zhang and Pearl Pu. Survey of Solving Multi-Attribute Decision Problems. EPFL Technical Report, 2004.
  [2] Derry O’Sullivan, David Wilson, and Barry Smyth. Improving Case-Based Recommendation: A Collaborative Filtering Approach. Lecture Notes in Computer Science, 2002.
  [3] Andreas Mild and Thomas Reutterer. An Improved Collaborative Filtering Approach for Predicting Cross-Category Purchases Based on Binary Market Basket Data. Journal of Retailing and Consumer Services, Special Issue on Model Building in Retailing & Consumer Services, 2002.
  [4] Robin van Meteren and Maarten van Someren. Using Content-Based Filtering for Recommendation. ECML 2000 Workshop, 2000.
  [5] A. Mufit Ferman, James H. Errico, Peter van Beek, and M. Ibrahim Sezan. Content-Based Filtering and Personalization Using Structured Metadata. JCDL’02, 2002.

  32. References - II
  [AAAI’06] Vincent Schickel and Boi Faltings. Inferring User’s Preferences Using Ontologies. In Proc. AAAI’06, pp. 1413–1419, 2006.
  [IJCAI’07] Vincent Schickel and Boi Faltings. OSS: A Semantic Similarity Function Based on Hierarchical Ontologies. To appear in Proc. IJCAI’07.
  [LNAI 4198] Vincent Schickel and Boi Faltings. Overcoming Incomplete User Models in Recommendation Systems via an Ontology. LNAI 4198, pp. 39–57, 2006.
  Thank you! Slides: http://people.epfl.ch/vincent.schickel-zuber
