
A Study of Awareness in Multimedia Search


Presentation Transcript


  1. A Study of Awareness in Multimedia Search Robert Villa, Nick Gildea, Joemon Jose Information Retrieval Group April 2008

  2. Overview • Introduction • Collaboration and awareness in search • Research questions • Experimental study • Inducing awareness: a game scenario • Video retrieval • Demo of multimedia retrieval system • Some results • Conclusions

  3. Information Retrieval • Deals with practical and theoretical models of searching unstructured collections of documents • Idealised aim: supply the system with a natural language description of your need • The system returns a ranked list of the documents relevant to your need • Process is naturally probabilistic and uncertain
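
As a rough illustration of this ranked-retrieval idea, here is a minimal sketch in Python using a simple TF-IDF score over a toy collection (the scoring function and documents are illustrative, not the system described in this talk):

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def rank(query, docs):
    """Rank documents against a query with a simple TF-IDF score."""
    n = len(docs)
    tokenized = [tokenize(d) for d in docs]
    # document frequency: how many documents contain each term
    df = Counter(t for toks in tokenized for t in set(toks))
    scored = []
    for i, toks in enumerate(tokenized):
        tf = Counter(toks)
        score = sum(tf[t] * math.log(n / df[t])
                    for t in tokenize(query) if t in df)
        scored.append((score, i))
    # highest-scoring documents first
    return sorted(scored, reverse=True)

docs = ["a dramatic arrival at the airport",
        "football results from the weekend",
        "dramatic scenes at the football match"]
print(rank("dramatic football", docs))
```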

  4. Video retrieval • Video retrieval systems index and search collections of videos • Like traditional IR, the indexing of the video data is assumed to be automatic • Extraction of visual or audio features • Use of automatic speech recognition • Queries are typically textual or by example

  5. Example interface

  6. Video retrieval • Videos are automatically split into ‘shots’ (shot segmentation) • Shot boundaries are determined using the visual content of video frames • Each shot is a short element of a video • in TRECVID 2006, typically 2 to 3 seconds, although there are some much longer shots • Shots are the unit of retrieval • Where a text system retrieves documents, video retrieval systems retrieve shots
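
As a sketch of the kind of visual-content comparison used for shot boundary detection, the following compares grey-level histograms of consecutive frames and marks a boundary where they differ strongly (the histogram feature and threshold are simplifying assumptions; production systems use more robust features):

```python
import numpy as np

def frame_histogram(frame, bins=16):
    """Normalised grey-level histogram of a frame (H x W array of 0-255 values)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def shot_boundaries(frames, threshold=0.4):
    """Mark a boundary where consecutive frame histograms differ strongly."""
    boundaries = []
    prev = frame_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame_histogram(frame)
        # L1 distance between normalised histograms, in [0, 2]
        if np.abs(cur - prev).sum() > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries

# two synthetic 'shots': dark frames followed by bright frames
dark = [np.full((10, 10), 30, dtype=np.uint8)] * 5
bright = [np.full((10, 10), 200, dtype=np.uint8)] * 5
print(shot_boundaries(dark + bright))  # -> [5]
```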

  7. Example of a shot • Every shot has an associated text transcript: • E.g. “A dramatic arrival” • Generated by Automatic Speech Recognition (ASR) • The transcript can often be very inaccurate

  8. Collaborative Retrieval • Most current search systems assume searching is a solitary activity • Is this always the case, or can collaborative searching with one or more others be effective? • Rather than focus on collaboration in general, we decided to look at only one aspect of collaboration - awareness

  9. Awareness • Awareness enables an “understanding of the activities of others”, an important aspect of collaboration • Paul Dourish and Victoria Bellotti. “Awareness and Coordination in Shared Workspaces”, CSCW'92 • Scenario: • Two users are searching on the same task at the same time in different places • Synchronous and remote

  10. Previous work – collaborative search • Cerchiamo (FXPAL, Xerox) • Adcock et al., TRECVID 2007 • Two people collaborating, one a “gatherer” and the other a “reviewer” • SearchTogether (Microsoft) • Morris, M. R. (2007) • Provides a messaging system, the ability to recommend web pages to one another, query awareness, etc. • Fischlar-DiamondTouch (DCU) • Smeaton et al. (2007) • Table-top display which allows two people to work around it

  11. Research question • Can awareness of another searcher aid a user when carrying out a multimedia search? • Will their performance increase? • Will less effort be needed to reach a given performance? • Shots played, browsing required • Will the user’s search behaviour change? • Number of queries executed, shots found independently

  12. Competitive game scenario • We wanted to evaluate the effect of awareness in a “best case” scenario • i.e. a situation where there was some benefit to users in being aware of another’s actions • A competitive game scenario was used, where pairs of users competed to “win” the search tasks

  13. Aim of the ‘game’ • The aim of the ‘game’ was to find as many relevant shots as possible for the task • Domain was video retrieval, where users had to search a video collection for ‘shots’ • Whoever finds the most shots ‘wins’ • A monetary award was given to the winner

  14. System • Our existing video retrieval system was modified to allow collaboration • Each user could be given a view of the other user’s search screen • This was designed to work with two monitors: • The user’s own search interface on one screen • The other screen optionally showing the other user’s search screen • We supported 4 different situations

  15. “Mutually Aware” User B User A A can see B’s screen and B can see A’s screen

  16. “A aware of B” User B User A A can watch B’s screen while B cannot watch A

  17. “B aware of A” User B User A B can watch A’s screen while A cannot watch B

  18. “Independent” User B User A Neither A nor B can watch the other
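
A minimal way to model these four situations, as a sketch (the names and flag representation here are ours, not the system's):

```python
from enum import Enum

class Condition(Enum):
    # value is the pair (A can watch B, B can watch A)
    MUTUAL = (True, True)
    A_AWARE_OF_B = (True, False)
    B_AWARE_OF_A = (False, True)
    INDEPENDENT = (False, False)

def remote_screen_visible(condition, user):
    """Whether `user` ('A' or 'B') may see the other user's screen."""
    a_watches_b, b_watches_a = condition.value
    return a_watches_b if user == "A" else b_watches_a

for c in Condition:
    print(c.name, remote_screen_visible(c, "A"), remote_screen_visible(c, "B"))
```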

  19. System interface

  20. Local search interface

  21. Text Query

  22. Search results

  23. Shots to use in relevance feedback

  24. Result shots for the user

  25. Remote search interface

  26. User cannot see the other user’s final results – only a count of the number of shots currently marked by that user

  27. This screen does not update automatically; the user must press the “Refresh” button to update the screen

  28. Video browser • A simple video browser pops up when the user clicks a keyframe • Allows the user to view the shot, and move backwards and forwards in the video

  29. Conditions • From the point of view of an individual user: • Working independently • Cannot watch the other user, and knows that the other user can watch him/her • Can watch the other user, and knows that the other user cannot watch him/her • Can watch the other user, and knows that the other user can watch him/her

  30. TRECVID 2006 Collection • Almost 260 hours of mostly news data from the end of 2005 • CNN, LBC, CCTV, etc. • Multilingual (English, Chinese and Arabic) • Has a standard shot segmentation • ASR transcripts provided • For Chinese and Arabic video, transcripts are also automatically translated into English

  31. TRECVID 2006 Topics • 24 topics, of which we used the 4 worst performing overall from the interactive track • We hoped these would pose a similarly hard challenge to our users • Adcock et al. (2007) found that users collaborated better on difficult tasks

  32. Topics

  33. Experimental design • A within-subjects study was carried out • Latin square design • 4 tasks • 4 conditions • 24 users (12 pairs)
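
A sketch of how a simple cyclic Latin square can balance the order of conditions across tasks and users (the condition and task labels here are illustrative; a Williams design would additionally control first-order carryover effects):

```python
def latin_square(items):
    """Cyclic Latin square: every item appears once per row and per column."""
    n = len(items)
    return [[items[(r + c) % n] for c in range(n)] for r in range(n)]

conditions = ["independent", "watched", "watching", "mutual"]
tasks = ["T1", "T2", "T3", "T4"]

# Each user gets one row of the square: user u meets condition row[i]
# on task tasks[i], so condition order is balanced across users.
square = latin_square(conditions)
for user in range(8):  # first 8 of the 24 users
    row = square[user % 4]
    print(f"user {user}: " + ", ".join(f"{t}->{c}" for t, c in zip(tasks, row)))
```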

  34. Procedure • Users took part in pairs • Users had 15 minutes to find as many shots as possible • At the end of the 4 tasks, the “winner” was announced • Each user was paid £10 • Winner got an extra £5, shared if there was a draw

  35. Results • 12 competitive runs • 11 wins and 1 draw • And there was an immediate issue with one of the users ...

  36. Search performance

  37. Search Performance • No significant difference was found between performance in the four conditions • Overall performance was very low (typical in video IR, for these hard topics) • Performance does, however, vary widely across the four tasks • Tasks 189 and 192 performed worst

  38. Search behaviour: queries • Do users execute more queries when searching alone? • A significant difference was found between the Watching and Independent conditions

  39. Number of shots found independently • A significant difference was found between the Independent and Watching conditions

  40. Changes in search behaviour • Users searched less when watching someone else • Searching was also reduced in the Watched condition (not significant) • Users found more shots themselves when watching someone else • More shots were also found in the Mutual and Watched conditions, but not significantly

  41. Search terms used • One possible way awareness may help is by providing a user with new terms to use in queries • Did users copy search terms from the remote user? • We could not directly record this in the logs (terms are easily retyped)

  42. Estimating copied search terms • Search terms which could have been copied were derived from the logs • Method: • Found the set of common terms • Found who used that term first • Checked for a click of the “refresh” button by the user who was second • Assumed that the second user could then have copied that term
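
A sketch of this estimation procedure over hypothetical query/refresh logs (the log format, field names, and the timing constraint on the refresh click are our assumptions):

```python
def possibly_copied_terms(log_a, log_b):
    """Estimate which of B's query terms could have been copied from A.

    Each log is a time-ordered list of (timestamp, event, payload) tuples,
    where event is 'query' (payload = set of terms) or 'refresh'.
    """
    def first_use(log):
        seen = {}
        for t, event, payload in log:
            if event == "query":
                for term in payload:
                    seen.setdefault(term, t)  # keep earliest use
        return seen

    first_a, first_b = first_use(log_a), first_use(log_b)
    refreshes_b = [t for t, event, _ in log_b if event == "refresh"]

    copied = set()
    for term in set(first_a) & set(first_b):   # terms both users issued
        if first_a[term] < first_b[term]:      # A used the term first
            # B pressed refresh (so could see A's screen) in between
            if any(first_a[term] < r < first_b[term] for r in refreshes_b):
                copied.add(term)
    return copied

log_a = [(1, "query", {"dramatic", "arrival"})]
log_b = [(2, "refresh", None), (3, "query", {"dramatic", "scenes"})]
print(possibly_copied_terms(log_a, log_b))  # -> {'dramatic'}
```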

  43. Copied terms • Suggests that a user is able to reuse search terms used by the other user

  44. Searcher effort

  45. Searcher effort • Recorded three types of events to gauge searcher effort • Play events, when a user clicks a shot • Move to next shot in video • Move to previous shot in video • The only significant relationship: • the Watching and Watched conditions, for move-to-previous-shot events
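
A sketch of how these event counts might be tallied per condition from a flat event log (the event names and tuple layout are assumptions, not the study's actual log schema):

```python
from collections import Counter

EFFORT_EVENTS = {"play", "next_shot", "prev_shot"}

def effort_by_condition(events):
    """Count play/next/previous events per condition.

    `events` is a list of (condition, event_name) tuples from the logs.
    """
    return Counter((cond, ev) for cond, ev in events if ev in EFFORT_EVENTS)

events = [("watching", "play"), ("watching", "prev_shot"),
          ("independent", "play"), ("watched", "prev_shot")]
print(effort_by_condition(events))
```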

  46. Where did a user’s final results come from? • From the interface, we logged the user dragging and dropping shots between the different parts of the interface • We could record when someone copied a shot from the other user • Using this, we can estimate (roughly!) where users got their final results
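
A sketch of attributing each final-result shot to its origin from drag-and-drop log events (the event fields and source labels are assumptions for illustration):

```python
def result_origins(drag_events, final_results):
    """Map each final-result shot to where it was last dragged from.

    drag_events: time-ordered (shot_id, source) tuples, where source is
    e.g. 'own_results' or 'remote_screen'; the last drag wins.
    """
    origin = {}
    for shot_id, source in drag_events:
        origin[shot_id] = source
    return {shot: origin.get(shot, "unknown") for shot in final_results}

drags = [("shot_12", "own_results"), ("shot_97", "remote_screen")]
print(result_origins(drags, ["shot_12", "shot_97", "shot_33"]))
```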

  47. Conclusions • Despite the game scenario, users didn’t copy other people’s shots much • This came as something of a surprise • There was no significant increase in a user’s performance • Only a trend ... • There is evidence that users do reuse search terms • Between 10% and 13% of terms were potentially copied

  48. Conclusions • Results on searcher effort were unclear • Significantly less interaction was found for only one event type
