
Future Information Access


Presentation Transcript


  1. Future Information Access Joo-Hwee Lim IPAL @ I2R, Singapore

  2. Current Information Access • Web, Desktop Search Engines (the war!) • Google, Yahoo, Microsoft (all US giant companies) • Why do we search? [EU Workshop Report on Challenges of Future Search Engines, Brussels, 15 Sep 2005] • Search effectively replaces traditional ways of engaging with information • In both cultural and commercial terms, search and retrieval is becoming the most significant single facilitating technology, as important as electricity or transport • Context: e-government, e-commerce, social life, relationships, leisure, security and intelligence, etc. • Distinct commercial sector: €40 billion a year by 2008 • Franco-German project “Quaero” (Latin for “I seek”; hard to spell, too)

  3. Current Search Paradigm • A TASK generates an Information Need • The user formulates / translates that need into Keywords • A Search Engine (e.g. Google Web, Image, Earth) returns relevant information for decision making • Potential problems: • semantic distortion • non-ubiquitous • input constraint (a minimal keyword-search sketch follows)
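To make the "keywords in, documents out" paradigm concrete, here is a minimal sketch of keyword search over an inverted index; it is illustrative only and not any particular engine's implementation:

```python
from collections import defaultdict

def build_index(docs):
    """Map each keyword to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """AND-semantics keyword search: the whole information need must be
    squeezed into a few words, which is where the semantic distortion
    and input constraint noted above come from."""
    hits = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*hits) if hits else set()

docs = {1: "sushi restaurant near the river",
        2: "watch store with the latest swatch design"}
print(search(build_index(docs), "sushi river"))   # -> {1}
```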

  4. Future Information Access • We live and move around in the physical world; we want information and computing to become a more “continuous” part of our real lives

  5. Location-Based Reminder • Recall using current location and associative memory: “Did we like the food when we were here last time?” • Stored note (Date: Dec 5 2004): “We just had a great sushi appetizer at this place!” (sketched below)
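A minimal sketch of how such a reminder could fire, assuming geotagged notes stored on the device and a haversine proximity check (all names here are hypothetical):

```python
import math
from dataclasses import dataclass

@dataclass
class Reminder:
    lat: float
    lon: float
    date: str
    note: str

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def nearby(reminders, lat, lon, radius_m=50.0):
    """Return notes captured within radius_m of the current position."""
    return [r for r in reminders if haversine_m(r.lat, r.lon, lat, lon) <= radius_m]

# Walking past the sushi place again surfaces the old note.
log = [Reminder(1.2839, 103.8515, "Dec 5 2004",
                "We just had a great sushi appetizer at this place!")]
print(nearby(log, 1.2840, 103.8516))
```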

  6. New “Information”, New Query • Content (multimedia): enrich experience in life • Query (closer to task): invoke relevant experience • New characteristics of Query • proactive, transparent, multimodal • e.g. current location, time, weather, network bandwidth (image or video), device (small display), body state (hungry), emotion (fun seeking), images of object/scene of interest, personal preference, calendar, relationships (extended preferences), etc. (one possible data shape is sketched below)
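One way to picture such a proactive, multimodal query is as a bundle of context signals rather than a typed keyword string. A hypothetical sketch (the field names are illustrative, not from the slides):

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class ContextQuery:
    """A query assembled from whatever context is available at query time;
    every field is optional and filled in by sensors, calendar, or media."""
    location: Optional[Tuple[float, float]] = None  # (lat, lon) from GPS
    time: Optional[str] = None                      # ISO timestamp
    weather: Optional[str] = None
    bandwidth_kbps: Optional[int] = None            # image vs. video results
    display: Optional[str] = None                   # "small" -> terse answers
    body_state: Optional[str] = None                # e.g. "hungry"
    emotion: Optional[str] = None                   # e.g. "fun seeking"
    images: list = field(default_factory=list)      # object/scene of interest
    preferences: dict = field(default_factory=dict) # personal + extended

q = ContextQuery(location=(48.2082, 16.3738), time="2005-09-15T12:30:00",
                 bandwidth_kbps=128, display="small", body_state="hungry")
```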

  7. Some possible scenarios • [commerce] on the way to a business meeting: • Swatch store nearby; wife’s birthday; Swatch collector; receive latest design on phone; send image of colleague’s watch as query, etc. • [relationship] revisit honeymoon location: • receive image/video of first date, wedding, honeymoon • [culture and education] a tourist in Vienna: • virtual experience of history, Mozart and his music, etc.; get more information by sending an image of the scene • [security] camera phone from a terrorist suspect: • images plus location trace to identify potential threat

  8. Collaborative Annotation • metadata production and sharing by Social Networks • e.g. propagate an annotation between two images captured in the same vicinity, based on their visual similarity → “The Merlion Statue is a symbol of Singapore. It guards the entrance to the Singapore River.” → ? (a sketch of the propagation rule follows)
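A hedged sketch of the propagation rule: copy an annotation to an unlabeled photo only when the two shots were taken in the same vicinity and look sufficiently alike. The similarity function is assumed (any image-descriptor distance would do), and the vicinity check uses a city-scale approximation:

```python
import math
from dataclasses import dataclass

@dataclass
class Photo:
    lat: float
    lon: float
    features: object          # any image descriptor
    caption: str = ""

def approx_dist_m(a, b):
    """Equirectangular distance approximation, adequate at city scale."""
    kx = 111320.0 * math.cos(math.radians((a.lat + b.lat) / 2))
    return math.hypot((b.lon - a.lon) * kx, (b.lat - a.lat) * 110540.0)

def propagate_annotation(labeled, unlabeled, similarity,
                         max_dist_m=100.0, min_sim=0.8):
    """Copy the caption only if both photos were captured nearby
    AND are visually similar enough; otherwise leave the '?' alone."""
    if (approx_dist_m(labeled, unlabeled) <= max_dist_m
            and similarity(labeled.features, unlabeled.features) >= min_sim):
        unlabeled.caption = labeled.caption
        return True
    return False
```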

  9. Scientific Axis 1 • Contextual Multimodal Interaction for Mobile Information Access (CMIMIA) • PI Singapore: Joo-Hwee Lim • PI France: Jean-Pierre Chevallet

  10. Motivation for CMIMIA • People are on the move • Mobile communication infrastructure and devices are becoming pervasive • Ubiquitous Mobile Information Access will be a key information activity in our daily lives • Current information retrieval technologies (web search, desktop search) cannot provide adequate solutions • Context: task, user profile, current location, etc. • Multimodal: images, audio, text description • Interaction: small display, multimedia, selection and adaptation of information

  11. Proposed Framework for mobile content query, consumption, and enhancement • [Framework diagram: the CMIMIA server mediates multimodal query/result exchanges with the mobile user, drawing content from Trusted Sites, Other Bloggers, and the General Web; components include context, interaction history, user profile, ontology-based annotation, collaborative annotation, annotation by examples, content/context representation/learning, and content selection] • Convergence and beyond: research on contextual IR, multimodal query and fusion, ontology adaptation, and information selection for mobile interaction

  12. Snap2Tell (IR on a small mobile device) • From image to text: reverses the usual IR paradigm of using text to search images • index is a set of images “describing” the object/scene • use image matching to select the object, then return related text and audio description • Image matching issues • image processing on the phone • contextual pruning, backward reasoning, etc. • robust image matching (invariant to scale, rotation, perspective, illumination, occlusion, etc.), sketched below
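A minimal sketch of the matching step, using OpenCV's ORB features with a ratio test as a stand-in for whatever descriptor Snap2Tell actually used (the slides do not specify one):

```python
import cv2

def match_score(query_path, index_path, ratio=0.75):
    """Count ratio-test-surviving ORB matches between two images."""
    orb = cv2.ORB_create()
    _, des_q = orb.detectAndCompute(cv2.imread(query_path, cv2.IMREAD_GRAYSCALE), None)
    _, des_i = orb.detectAndCompute(cv2.imread(index_path, cv2.IMREAD_GRAYSCALE), None)
    if des_q is None or des_i is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = 0
    for pair in matcher.knnMatch(des_q, des_i, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

def snap2tell(query_path, index):
    """index: list of (image_path, description) pairs; return the
    description attached to the best-matching indexed image."""
    best = max(index, key=lambda e: match_score(query_path, e[0]))
    return best[1]
```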

  13. What’s Next? • New Related Projects • Snap2Go: image-based navigation assistant • MobiLog: context-aware mobile blog producer • New Research Challenges • Precise relevance versus surprise, discovery • Discriminative image semantics discovery • Personal context modeling: adaptive ontology • Multimodal interaction: query and data fusion • Social network to improve relevance: collaborative annotation

  14. Summing up… “Information is becoming pervasive in real space, enchanting physical spaces with multimedia content that can deepen our experiences of them, and making query and search into a more ‘continuous’ part of our real lives”
