
Personalisation in Mobile Environment

Personalisation in Mobile Environment. Based on papers and presentations of Catholijn Jonker, Vagan Terziyan, Jan Treur, Oleksandra Vitko and others, Department of Mathematical Information Technology (MIT), University of Jyväskylä. Based on: http://www.cs.jyu.fi/ai/papers/CIA-2003-2.pdf.

Presentation Transcript


  1. Personalisation in Mobile Environment Based on papers and presentations of Catholijn Jonker, Vagan Terziyan, Jan Treur, Oleksandra Vitko and others, Department of Mathematical Information Technology (MIT), University of Jyväskylä

  2. Based on: http://www.cs.jyu.fi/ai/papers/CIA-2003-2.pdf Jonker C., Terziyan V., Treur J., Temporal and Spatial Analysis to Personalize an Agent’s Dynamic Belief, Desire and Intention Profiles, In: M. Klusch et al. (eds.), Cooperative Information Agents VII: Proceedings of the 7th International Workshop on Cooperative Information Agents (CIA-2003), Helsinki, Finland, August 27-29, 2003, Lecture Notes in Artificial Intelligence, Vol. 2782, Springer-Verlag, pp. 289-315. • Catholijn Jonker • Department of Artificial Intelligence • Vrije Universiteit Amsterdam (the Netherlands) • jonker@cs.vu.nl • http://www.cs.vu.nl/~jonker • Vagan Terziyan • Department of Mathematical Information Technology • University of Jyväskylä (Finland) • vagan@it.jyu.fi • http://www.cs.jyu.fi/ai/vagan • Jan Treur • Department of Artificial Intelligence • Vrije Universiteit Amsterdam (the Netherlands) • treur@cs.vu.nl • http://www.cs.vu.nl/~treur

  3. Agent is Part of an Intentional System • An agent is considered as part of an intentional system and thus it is an entity which appears to be the subject of beliefs, desires, intentions, etc.; • An agent is assumed to decide to act and communicate based on its beliefs about its environment and its desires and intentions. These decisions, and the intentional notions by which they can be explained and predicted, generally depend on circumstances in the environment and, in particular, on where and when the information about these circumstances was acquired.

  4. Where Might We Need Spatial Considerations in an Intentional System? • … in many applications where the agent’s intentional profile essentially depends on its location in the environment, e.g.: • for adaptive location-based “Push” services for mobile customers; • for intelligent tracking of terrorists; • etc.

  5. Adaptive Location-Based “Push” Services for Mobile Customers • It would be very helpful to be able to predict in which places of the environment certain desires or intentions of a customer are likely to arise, and to stimulate these intentions by providing the circumstances that are likely to lead to them.

  6. Intelligent Tracking of e.g. Terrorists, etc. • We also assume that it may be very helpful to be able to predict in which places of the environment certain inappropriate desires or intentions are likely to arise, either: • to avoid the arising of these intentions by preventing the occurrence of circumstances that are likely to lead to them, or • if these circumstances cannot be avoided, to anticipate the consequences of the intentions.

  7. Basic Ontologies Used • Actual state of the external world: EWOnt; • Observation (observation_result(p)) and communication (communicated_by(p, C)) input of an agent: InOnt; • Output of an agent (decisions to do actions): OutOnt; • Agent internal ontology: IntOnt; • Agent interface ontology: InterfaceOnt = InOnt ∪ OutOnt; • Agent ontology: AgOnt = InterfaceOnt ∪ IntOnt; • Overall ontology: OvOnt = EWOnt ∪ AgOnt. Here p is a state property of the external world; C is an agent who provides information about a state property.
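
A minimal sketch of how these ontology compositions fit together, modelling each ontology simply as a set of term names (the concrete term names are illustrative, taken loosely from the slides, not a fixed vocabulary from the paper):

```python
# Illustrative only: each ontology is modelled as a set of term names, so the
# compositions on the slide become plain set unions.
EWOnt  = {"location_has_property", "distance_from_start"}
InOnt  = {"observation_result", "communicated_by"}
OutOnt = {"to_be_performed"}          # assumed name for an action-decision term
IntOnt = {"belief", "desire", "intention"}

InterfaceOnt = InOnt | OutOnt          # InterfaceOnt = InOnt ∪ OutOnt
AgOnt        = InterfaceOnt | IntOnt   # AgOnt = InterfaceOnt ∪ IntOnt
OvOnt        = EWOnt | AgOnt           # OvOnt = EWOnt ∪ AgOnt
```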

  8. Overall Trace • The set of possible states over the overall ontology: IS(OvOnt). • An overall trace M over a time frame T is a sequence of states over the overall ontology OvOnt and time frame T: (Mt)t∈T in IS(OvOnt). • States of agent A’s input/output interfaces and internal state at time t, given an overall trace M: • state(M, t, input(A)); • state(M, t, output(A)); • state(M, t, internal(A)).

  9. Temporal Belief Statement • An agent believes a fact if and only if it received input about it in the past and the fact is not contradicted by later input of the opposite. Temporal belief statement: ∀M ∀t1 [ Belief(φ)(M, t1) ⇔ ∃t0 ≤ t1 [ Input(φ, t0, M) ∧ ∀t ∈ [t0, t1] ¬Input(~φ, t, M) ] ]
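
A minimal sketch of this temporal belief statement, assuming a trace is represented as a dictionary from time points to the set of properties present on the agent's input at that time (the representation and function names are illustrative, not the paper's):

```python
# Belief(p) holds at t1 iff p appeared on the agent's input at some t0 <= t1
# and ~p never appeared on the input anywhere in [t0, t1].

def received_input(trace, t, prop):
    """True if property `prop` is present on the agent's input state at time t."""
    return prop in trace.get(t, set())

def believes(trace, t1, prop, negated_prop):
    """Exists t0 <= t1 with input(prop) at t0 and no input(negated_prop) in [t0, t1]."""
    times = sorted(t for t in trace if t <= t1)
    for t0 in times:
        if received_input(trace, t0, prop):
            if not any(received_input(trace, t, negated_prop)
                       for t in times if t0 <= t <= t1):
                return True
    return False

# Example trace: input states indexed by time (properties are plain strings here).
trace = {
    1: {"observation_result(location_has_property(E, 5, shop_open))"},
    3: {"observation_result(location_has_property(E, 5, shop_closed))"},
}
p  = "observation_result(location_has_property(E, 5, shop_open))"
np = "observation_result(location_has_property(E, 5, shop_closed))"
print(believes(trace, 2, p, np))  # True  - opposite not yet observed
print(believes(trace, 4, p, np))  # False - contradicted by later input
```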

  10. Desires • Given a desire, for each relevant action there is an additional reason, so that if both the desire is present and the agent believes the additional reason, then the intention to perform the action will be generated. • Every intention is based on a desire, i.e., no intention occurs without a desire.

  11. Intentions and Actions (temporal vs. spatial) • Under appropriate circumstances an intention leads to an action: • an agent who intends to perform an action will execute the action immediately when an opportunity occurs; • an agent who intends to perform an action will execute the action at the nearest known place where an opportunity occurs.

  12. Satisfaction Relation • A state property φ is true in this state at time t: state(M, t, input(A)) |= φ, where φ ∈ SPROP(InOnt), the set of state properties based on the ontology InOnt.

  13. Spatial Language Elements • Location (x, y) has property p: location_has_property(x, y, p); • Agent A is present at location (x, y): location_has_property(x, y, present(A)); • For example: state(M, t, input(A)) |= observation_result(location_has_property(x, y, p)) • means that at time t the agent A's input has the information that it observed that location (x, y) has property p.

  14. A Route • A route R is defined as a mapping from distances d (on the route) to locations (x, y), • e.g., after 300 m on the route home_from_school you are at location (E, 5) on the map: at_distance_at_location(R, d, x, y); at_distance_at_location(home_from_school, 300, E, 5).

  15. Associated Route of a “Walking” Agent • A trace M of an agent walking in a city specifies an associated route R(M) as follows: on the route, after distance d you are at location (x, y) if and only if a time point t exists such that at t agent A has walked d from the start and is present at location (x, y): at_distance_at_location(R(M), d, x, y) ⇔ ∃t [ state(M, t, EW) |= distance_from_start(d) ∧ location_has_property(x, y, present(A)) ]
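
A small sketch of deriving the associated route R(M) from a trace, assuming each trace entry records the distance walked from the start and the current location (this data layout is an assumption for illustration):

```python
# Build the mapping distance-from-start -> location (x, y), i.e. the relation
# at_distance_at_location(R(M), d, x, y), from a time-indexed trace.

def associated_route(trace):
    route = {}
    for t in sorted(trace):
        state = trace[t]
        route[state["distance_from_start"]] = state["location"]
    return route

trace = {
    0: {"distance_from_start": 0,   "location": ("A", 1)},
    5: {"distance_from_start": 300, "location": ("E", 5)},  # after 300 m: (E, 5)
}
print(associated_route(trace)[300])  # ('E', 5)
```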

  16. Temporal vs. Spatial Factors (Case 1) • Person A (1981-1986 – M.Sc. studies on Applied Mathematics; 1987-2000 – Ph.D. studies on Artificial Intelligence; 2001-2002 – project work on Ontology Engineering); • Person B (M.Sc. studies on Applied Mathematics at the University of Jyväskylä; Ph.D. studies on Artificial Intelligence at the Massachusetts Institute of Technology; project work on Ontology Engineering at Vrije Universiteit Amsterdam). • The spatial history here (i.e. the second description) seems to be more informative than the temporal history in a reasonable context.

  17. Temporal vs. Spatial Factors (Case 2) • Person A (10:00 wants a cup of coffee; 15:00 wants to eat; 19:00 wants to watch TV news; 23:00 wants to sleep); • Person B (wants a cup of coffee in the train “Jyväskylä-Helsinki” near Pasila station; wants to eat in the Helsinki University conference room; wants to watch TV news in the Irish pub in downtown Helsinki; wants to sleep in the “Scandic” hotel). • Conversely, the temporal history here (i.e. the first description) seems to provide more information about the person than the spatial one in a reasonable context.

  18. Predicting Agent’s States Based on Spatial History • Given: a set of routes M for the agent, with the agent's state observed at different location points of each route; • Task: online prediction of the agent's next locations, BDI attributes and states for a new route.

  19. Relationships between BDI Notions

  20. Mobile Commerce (Location-Based Service) • Agent: a mobile customer. • Agent's location: can be tracked by the positioning infrastructure. • Observable agent's actions: e.g. clickstream (points of interest) on a map delivered to the mobile terminal, calls and downloads of information about points of interest, appropriate orders, reservations, payments, etc.; can be tracked by the Location-Based Service (LBS).

  21. Mobile Location-Based Service (advanced personalization)

  22. Architecture of the LBS System • [diagram] Components: Positioning Service; mobile network; Location-Based Service; Personal Trusted Device; geographical (spatial) data; location-based data: (1) services database (access history), (2) customers database (profiles).

  23. Positioning Systems • Cellular-network-based positioning • Satellite-based positioning

  24. Opening a Connection to Location Service

  25. Request and Receive Map from the Location Service

  26. Selecting a Point of Interest on the Map

  27. Receiving Information Content Related to Point of Interest

  28. Contextual and Predictive Attributes • [diagram: the mobile customer description is divided into contextual attributes and predictive attributes, which are related to the ordered service]

  29. Challenges of Prediction in a Mobile Environment • Goal of classification (prediction) in a mobile e-commerce environment: • Given: the customer's profile and location features; • Goal: predict the customer's next order (based on the prediction of his beliefs, desires, intentions, etc.). • Main subtasks in the prediction process: • Feature selection • Distance evaluation • Classification (prediction)

  30. Feature Selection: to find the minimally sized feature subset that is sufficient for correct classification of the instance. [diagram: Sample Instances → Feature Selector → Sample Instances with selected features]
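
A brute-force sketch of this feature-selection goal: try subsets of increasing size and return the first one under which no two sample instances agree on all selected features yet disagree on the class. This is illustrative only (the data is made up); the work cited on slide 37 learns feature selection rather than searching exhaustively.

```python
from itertools import combinations

def minimal_sufficient_subset(instances, labels, n_features):
    """Smallest feature subset that still separates the labelled sample instances."""
    for size in range(1, n_features + 1):
        for subset in combinations(range(n_features), size):
            projected = {}
            consistent = True
            for features, label in zip(instances, labels):
                key = tuple(features[i] for i in subset)
                # Two instances with the same projected features but different
                # classes mean this subset is not sufficient.
                if projected.setdefault(key, label) != label:
                    consistent = False
                    break
            if consistent:
                return subset
    return tuple(range(n_features))

instances = [("white", 15, "train"), ("red", 25, "pub"), ("white", 25, "hotel")]
labels    = ["soft_drink", "wine", "wine"]
print(minimal_sufficient_subset(instances, labels, 3))  # (1,) - temperature alone suffices
```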

  31. Distance Evaluation: to measure the distance between instances based on their numerical or nominal attribute values. [diagram: Distance Evaluator]

  32. Distance between Two Instances with Heterogeneous Attributes (e.g. Preferences): D(A, B) = √( Σi ωi · d(ai, bi) ), where ωi is the importance (weight) of attribute i and d(ai, bi) is the per-attribute distance: 1 for differing nominal values, 0 for equal ones, and the range-normalised difference for numeric values.

  33. Simple Distance between Two Preferences with Heterogeneous Attributes (Example) • Wine Preference 1: I prefer white wine served at 15 °C. • Wine Preference 2: I prefer red wine served at 25 °C. • Importance: wine colour ω1 = 0.7; wine temperature ω2 = 0.3. • d(“white”, “red”) = 1; d(15°, 25°) = 10° / ((+30°) − (+10°)) = 0.5. • D(Wine_preference_1, Wine_preference_2) = √(0.7 · 1 + 0.3 · 0.5) ≈ 0.922.
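
The calculation above can be reproduced with a few lines of code. The formula is reconstructed from the worked numbers (a square root over importance-weighted per-attribute distances), so treat it as a sketch rather than the paper's exact definition:

```python
import math

def simple_distance(a, b, weights, numeric_ranges):
    """Weighted heterogeneous distance: 0/1 for nominal attributes,
    range-normalised difference for numeric ones."""
    total = 0.0
    for i, (x, y) in enumerate(zip(a, b)):
        if i in numeric_ranges:                      # numeric attribute
            lo, hi = numeric_ranges[i]
            d = abs(x - y) / (hi - lo)
        else:                                        # nominal attribute
            d = 0.0 if x == y else 1.0
        total += weights[i] * d
    return math.sqrt(total)

pref1 = ("white", 15)            # white wine served at 15 °C
pref2 = ("red", 25)              # red wine served at 25 °C
weights = (0.7, 0.3)             # importance of colour and temperature
numeric_ranges = {1: (10, 30)}   # serving-temperature range +10..+30 °C
print(round(simple_distance(pref1, pref2, weights, numeric_ranges), 3))  # 0.922
```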

  34. Advanced Distance between Two Preferences with Heterogeneous Attributes (Example) - 1 • Domain objects: 1000 drinks; 300 red, 500 white, 200 other. • Soft drinks: 600; 100 red, 400 white, 100 other. • Wines: 400; 200 red, 100 white, 100 other. • P(soft_drink | colour = white) = 400 / 500 = 0.8; • P(wine | colour = white) = 100 / 500 = 0.2; • P(soft_drink | colour = red) = 100 / 300 = 0.33; • P(wine | colour = red) = 200 / 300 = 0.67.

  35. Advanced Distance between Two Preferences with Heterogeneous Attributes (Example) - 2 • Using the conditional probabilities from the previous slide: d(“white”, “red”) = √[ (P(soft_drink | colour = white) − P(soft_drink | colour = red))² + (P(wine | colour = white) − P(wine | colour = red))² ] = √[ (0.8 − 0.33)² + (0.2 − 0.67)² ] ≈ 0.665. • D(Wine_preference_1, Wine_preference_2) = √(0.7 · 0.665 + 0.3 · 0.5) ≈ 0.784.
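
A sketch of this "advanced" nominal distance: the distance between two colour values is the Euclidean distance between their conditional class distributions (a value-difference-metric style measure), plugged into the same weighted combination as before. It reproduces the numbers above:

```python
import math

def value_difference(value_a, value_b, class_given_value):
    """class_given_value[value] is a dict of P(class | value)."""
    pa, pb = class_given_value[value_a], class_given_value[value_b]
    classes = set(pa) | set(pb)
    return math.sqrt(sum((pa.get(c, 0.0) - pb.get(c, 0.0)) ** 2 for c in classes))

class_given_colour = {
    "white": {"soft_drink": 0.8, "wine": 0.2},    # 400/500, 100/500
    "red":   {"soft_drink": 0.33, "wine": 0.67},  # 100/300, 200/300
}
d_colour = value_difference("white", "red", class_given_colour)
print(round(d_colour, 3))                                  # 0.665
d_temp = 0.5                                               # from the simple example
print(round(math.sqrt(0.7 * d_colour + 0.3 * d_temp), 3))  # 0.784
```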

  36. Classification (Prediction): to predict the class of a new instance based on its selected features and its location relative to the sample instances. [diagram: Sample Instances → Feature Selector → Distance Evaluator → Classification Processor]
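
A minimal sketch of the classification step. A plain k-nearest-neighbour vote is assumed here for illustration; the slides do not fix a particular classifier, only that it uses the selected features and the distance function:

```python
from collections import Counter

def predict(new_instance, samples, distance, k=3):
    """samples: list of (instance, class); distance: callable on two instances.
    Predict the majority class among the k nearest sample instances."""
    neighbours = sorted(samples, key=lambda s: distance(new_instance, s[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

Any distance produced by the Distance Evaluator (e.g. `simple_distance` from the sketch above) can be passed in as the `distance` argument.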

  37. Read more in: Terziyan V., Dynamic Integration of Virtual Predictors, In: L.I. Kuncheva, F. Steimann, C. Haefke, M. Aladjem, V. Novak (Eds), Proceedings of the International ICSC Congress on Computational Intelligence: Methods and Applications - CIMA'2001, Bangor, Wales, UK, June 19-22, 2001, ICSC Academic Press, Canada/The Netherlands, pp. 463-469. http://www.cs.jyu.fi/ai/papers/virtual_predictors.pdf Puuronen S., Tsymbal A., Terziyan V., Distance Functions in Dynamic Integration of Data Mining Techniques, In: B.V. Dasarathy (ed.), Data Mining and Knowledge Discovery: Theory, Tools and Technology II, Proceedings of SPIE, Vol. 4057, The Society of Photo-Optical Instrumentation Engineers, USA, 2000, pp. 22-32. http://www.cs.jyu.fi/~alexey/distance.pdf Skrypnik I., Terziyan V., Puuronen S., Tsymbal A., Learning Feature Selection for Medical Databases, In: Proc. of the 12th IEEE Symposium on Computer-Based Medical Systems CBMS'99, Stamford, USA, June 1999, IEEE CS Press, pp. 53-58. http://dlib.computer.org/conferen/cbms/0234/pdf/02340053.pdf

  38. Prediction of Customer’s Actions • [map diagram: points at distances d1–d5 from the customer, labelled “here I had a massage”, “here I had nice wine”, “here I washed my car”, “here I had great pizza”, “here I had my hair done”] • I am here now. These are my recent preferences: 1. I need to wash my car: 0.1; 2. I want to drink some wine: 0.2; 3. I need a massage: 0.2; 4. I want to eat pizza: 0.8; 5. I need to have my hair done: 0.6. • Make a guess what I will order now, and where!
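
One plausible way to combine these ingredients, purely as an assumption for illustration (the slides do not give a scoring rule, and the distances below are made up): weight each current preference by its strength and discount it by the distance to the place where that action was performed before.

```python
# Hypothetical scoring: score = preference strength / (1 + distance to the place
# where the action was previously done). Not the paper's method.

preferences = {   # strength of each current preference (from the slide)
    "wash car": 0.1, "drink wine": 0.2, "massage": 0.2,
    "eat pizza": 0.8, "make hair": 0.6,
}
distances_km = {  # made-up stand-ins for d1..d5
    "wash car": 3.0, "drink wine": 2.0, "massage": 1.0,
    "eat pizza": 2.5, "make hair": 4.0,
}

scores = {action: preferences[action] / (1.0 + distances_km[action])
          for action in preferences}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))   # eat pizza 0.23
```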

  39. Contextual Effect on Conditional Probability (1) • [diagram: a network of contextual attributes (including xt) and predictive attributes x1–x7 (including xk and xr)] • Assume a conditional dependence between predictive attributes (a causal relation between physical quantities)… • …some contextual attribute xt may directly affect the conditional dependence between the predictive attributes xk and xr, but not the attributes themselves.

  40. Contextual Effect on Conditional Probability (2) • Contextual attribute xt: Xt1 = “I am in Paris”; Xt2 = “I am in Moscow”. • Predictive attribute xk (“Order present”): Xk1 = “order flowers”; Xk2 = “order wine”. • Predictive attribute xr (“Make a visit”): Xr1 = “visit football match”; Xr2 = “visit girlfriend”.

  41. Contextual Effect on Conditional Probability (3) • [diagram: the conditional dependence between xk and xr takes a different value for each context value Xt1 = “I am in Paris” and Xt2 = “I am in Moscow”]

  42. Contextual Effect on Conditional Probability (4) • X = {x1, x2, …, xn} – a predictive attribute with n values; • Z = {z1, z2, …, zq} – a contextual attribute with q values; • P(Y|X) = {p1(Y|X), p2(Y|X), …, pr(Y|X)} – the conditional dependence attribute (a random variable) between X and Y with r possible values; • P(P(Y|X)|Z) – the conditional dependence between attribute Z and attribute P(Y|X).

  43. Contextual Effect on Unconditional Probability (1) • [diagram: contextual attributes (including xt) and predictive attributes (including xk with values x1–x4)] • Assume some predictive attribute xk is a random variable with an appropriate probability distribution P(Xk) over its values… • …some contextual attribute xt may directly affect the probability distribution of the predictive attribute Xk.

  44. Contextual Effect on Unconditional Probability (2) • Contextual attribute xt: Xt1 = “I am in Paris”; Xt2 = “I am in Moscow”. • Predictive attribute xk (“Order present”): Xk1 = “order flowers”; Xk2 = “order wine”. • [bar charts: two probability distributions P1(Xk) and P2(Xk) over the values Xk1 and Xk2, one for each context value]

  45. Contextual Effect on Unconditional Probability • X = {x1, x2, …, xn} – a predictive attribute with n values; • Z = {z1, z2, …, zq} – a contextual attribute with q values, and P(Z) – the probability distribution over the values of Z; • P(X) = {p1(X), p2(X), …, pr(X)} – the probability distribution attribute for X (a random variable) with r possible values (different possible probability distributions for X), and P(P(X)) – the probability distribution over the values of attribute P(X); • P(Y|X) – the conditional probability distribution of Y given X; • P(P(X)|Z) – the conditional probability distribution for attribute P(X) given Z.
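
A small sketch of how the context then determines the resulting unconditional distribution: P(P(X)|Z) selects which candidate distribution governs X, and P(X) is the mixture of the candidates weighted by P(P(X) = pi) = Σz P(Z = z) · P(P(X) = pi | Z = z). All numbers are illustrative, not taken from the slides:

```python
# Context Z = "Paris" / "Moscow"; the attribute P(X) ranges over two candidate
# distributions P1 and P2 for the predictive attribute X ("order present").

P_Z = {"Paris": 0.6, "Moscow": 0.4}                       # P(Z)
P_dist_given_Z = {                                        # P(P(X) | Z)
    "Paris":  {"P1": 1.0, "P2": 0.0},
    "Moscow": {"P1": 0.0, "P2": 1.0},
}
candidate_distributions = {                               # possible values of P(X)
    "P1": {"order flowers": 0.7, "order wine": 0.3},
    "P2": {"order flowers": 0.5, "order wine": 0.5},
}

P_X = {}
for name, dist in candidate_distributions.items():
    weight = sum(P_Z[z] * P_dist_given_Z[z][name] for z in P_Z)  # P(P(X) = name)
    for x, p in dist.items():
        P_X[x] = P_X.get(x, 0.0) + weight * p
print({x: round(p, 2) for x, p in P_X.items()})  # {'order flowers': 0.62, 'order wine': 0.38}
```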

  46. Causal Relation between Conditional Probabilities • There might be a causal relationship between two conditional probabilities: the attribute P(Xn|Xm) (with possible values P1(Xn|Xm), P2(Xn|Xm), P3(Xn|Xm)) and the attribute P(Xr|Xk) (with possible values P1(Xr|Xk), P2(Xr|Xk)), modelled by the distributions P(P(Xn|Xm)), P(P(Xr|Xk)) and the conditional P(P(Xr|Xk) | P(Xn|Xm)). • [diagram: first-level nodes xm, xn, xk, xr and second-level nodes for the conditional probability attributes]

  47. Two-level Bayesian Metanetwork for managing conditional dependencies

  48. Example of a Bayesian Metanetwork • The nodes of the 2nd-level network correspond to the conditional probabilities of the 1st-level network, P(B|A) and P(Y|X). The arc in the 2nd-level network corresponds to the conditional probability P(P(Y|X)|P(B|A)).
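
A minimal numerical sketch of this example (all tables are made up): the second-level network chooses which table is in force for the first-level conditional P(Y|X), conditioned on the value of P(B|A), and the first-level marginal P(Y) is obtained by averaging over that choice:

```python
# Second level: nodes range over the *values* (tables) of the first-level
# conditionals P(B|A) and P(Y|X); its arc carries P(P(Y|X) | P(B|A)).

P_X = {"x1": 0.3, "x2": 0.7}                      # first-level prior over X

# Candidate tables (values) for the first-level conditional P(Y|X).
P_Y_given_X = {
    "PY1": {"x1": {"y1": 0.8, "y2": 0.2}, "x2": {"y1": 0.3, "y2": 0.7}},
    "PY2": {"x1": {"y1": 0.1, "y2": 0.9}, "x2": {"y1": 0.5, "y2": 0.5}},
}

# Second-level network: prior over the values of P(B|A) ("PB1", "PB2") and the
# arc P(P(Y|X) | P(B|A)) connecting the two second-level nodes.
P_PB = {"PB1": 0.7, "PB2": 0.3}
P_PY_given_PB = {"PB1": {"PY1": 0.9, "PY2": 0.1},
                 "PB2": {"PY1": 0.2, "PY2": 0.8}}

# Marginal P(Y): average over the second-level choice of P(Y|X) table, then over X.
P_Y = {}
for pb, w_pb in P_PB.items():
    for py, w_py in P_PY_given_PB[pb].items():
        for x, px in P_X.items():
            for y, p in P_Y_given_X[py][x].items():
                P_Y[y] = P_Y.get(y, 0.0) + w_pb * w_py * px * p
print({y: round(v, 2) for y, v in P_Y.items()})   # {'y1': 0.43, 'y2': 0.57}
```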

  49. 2-level Relevance Bayesian Metanetwork (for modelling relevant features’ selection)

  50. General Case of Managing Relevance (1) • Predictive attributes: X1 with values {x11, x12, …, x1nX1}; X2 with values {x21, x22, …, x2nX2}; … XN with values {xN1, xN2, …, xNnXN}. • Target attribute: Y with values {y1, y2, …, yny}. • Probabilities: P(X1), P(X2), …, P(XN); P(Y|X1, X2, …, XN). • Relevancies: ψX1 = P(Ψ(X1) = “yes”); ψX2 = P(Ψ(X2) = “yes”); … ψXN = P(Ψ(XN) = “yes”). • Goal: to estimate P(Y).
