
HOW PEOPLE GO ABOUT SEARCHING VIDEO COLLECTIONS: LESSONS LEARNED





Presentation Transcript


  1. HOW PEOPLE GO ABOUT SEARCHING VIDEO COLLECTIONS: LESSONS LEARNED presented by Barbara M. Wildemuth School of Information & Library Science University of North Carolina at Chapel Hill

  2. VIDEO versus TEXT
  • Searching video collections is just like searching collections of text documents.
  • Searching video collections is nothing like searching collections of text documents.

  3. The research model

  4. The research model

  5. Tasks associated with video retrieval
  • Select items from the collection
  • Select particular clips or frames
  • Evaluate the style of the video
  • All oriented toward re-use of the video retrieved

  6. The research model

  7. What is the role of visual surrogates?

  8. Which surrogates are most effective?
  • Storyboard with text keywords
  • Storyboard with audio keywords
  • Slide show with text keywords
  • Slide show with audio keywords
  • Fast forward
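Both the storyboard and slide-show surrogates compared on this slide start from a set of keyframes extracted from the source video. As a rough illustration of the simplest possible selection strategy, here is a minimal sketch; the function name and the uniform-sampling approach are illustrative assumptions, not the Open Video Project's actual keyframe-extraction method (which would favor salient frames, e.g., at shot boundaries):

```python
def storyboard_frames(total_frames, n_keyframes):
    """Pick n evenly spaced frame indices to build a storyboard or
    slide-show surrogate.

    Uniform sampling is a naive stand-in for real keyframe selection:
    it takes the middle frame of each equal-length segment of the video.
    """
    if total_frames <= 0 or n_keyframes <= 0:
        return []
    step = total_frames / n_keyframes
    return [int(step * i + step / 2) for i in range(n_keyframes)]

# A 1000-frame clip summarized by a 4-frame storyboard:
indices = storyboard_frames(1000, 4)
```

The same frame indices could feed either surrogate style; the difference between storyboard and slide show is purely in presentation (all frames at once versus one at a time).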

  9. How fast is too fast?
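The "how fast is too fast?" question concerns fast-forward surrogates, which are typically built by keeping only every Nth frame of the video. A minimal sketch of the arithmetic involved — names and parameters are my own illustration, not the implementation evaluated in the cited JCDL 2003 study:

```python
def fast_forward_plan(total_frames, fps, speedup):
    """Plan a fast-forward surrogate by keeping every `speedup`-th frame.

    `speedup` is the compaction ratio (e.g., 32 means a 1:32 fast
    forward). Returns the kept frame indices and the surrogate's
    playback duration in seconds, assuming playback at the original fps.
    """
    kept = list(range(0, total_frames, speedup))
    duration_s = len(kept) / fps
    return kept, duration_s

# A 2-minute clip (3600 frames at 30 fps) compacted 1:32 plays back
# in under 4 seconds:
frames, secs = fast_forward_plan(total_frames=3600, fps=30, speedup=32)
```

The empirical question the slide raises is where on this `speedup` axis object recognition and gist comprehension begin to break down.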

  10. AgileViews framework
  • User should be able to move from one view to another quickly and easily
  • Overview
  • Preview
  • History view
  • Peripheral view
  • Shared view

  11. The research model

  12. The role of video structure
  • What is narrativity?
  • Cause and effect across scenes
  • Persistent characters
  • BOTH
  • The next step: use video structure to “tune” the video retrieval system

  13. Methodological contributions
  • Measures of performance
    • Object recognition (text, graphical)
    • Action recognition
    • Linguistic gist comprehension (full text, multiple choice)
    • Visual gist comprehension (multi-faceted)
  • Measures of user perceptions
    • Perceived ease of use
    • Perceived usefulness
    • Flow (concentration, enjoyment)
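One of the performance measures listed, multiple-choice gist comprehension, reduces in its simplest form to a proportion-correct score. A trivial sketch of that scoring step — my own formulation, not the instrument described in the cited SILS technical report:

```python
def multiple_choice_gist_score(answers, key):
    """Proportion of multiple-choice gist questions answered correctly.

    `answers` and `key` are parallel sequences of response labels;
    unanswered items should appear as None and count as incorrect.
    """
    if not key:
        return 0.0
    correct = sum(a == k for a, k in zip(answers, key))
    return correct / len(key)
```

Full-text gist measures require human or automated judgment of free-text summaries, which is why the multiple-choice variant is attractive for controlled comparisons across surrogate types.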

  14. Next research questions
  • Do the current findings hold up outside the lab?
  • What role might sound/audio play in helping people to select useful videos from a collection? How can attributes of the sound track be represented in a surrogate?
  • What tools are needed to support re-use of the video retrieved?

  15. Your questions?
  • More details at: http://www.open-video.org/
  • Acknowledgements to the Open Video team: Gary Marchionini, Gary Geisler, Xiangming Mu, Meng Yang, Michael Nelson, Beth Fowler, Jon Elsas, Rich Gruss, Anthony Hughes, Jie Luo, Sanghee Oh, Amy Pattee, Terrell Russell, Laura Slaughter, Richard Spinks, Christine Stachowitz, Tom Tolleson, TJ Ward, Curtis Webster, Todd Wilkens

  16. Studies cited today
  • Geisler, G., Marchionini, G., Nelson, M., Spinks, R., & Yang, M. (2001). Interface concepts for the Open Video Project. Proceedings of the Annual Conference of the American Society for Information Science & Technology, 58-75. http://www.ischool.utexas.edu/~geisler/info/p514-geisler.pdf
  • Hughes, A., Wilkens, T., Wildemuth, B., & Marchionini, G. (2003). Text or pictures? An eyetracking study of how people view digital video surrogates. Proceedings of the International Conference on Image and Video Retrieval (CIVR), 271-280. http://www.open-video.org/papers/hughes_civr_2003.pdf
  • Wildemuth, B. M., Marchionini, G., Wilkens, T., Yang, M., Geisler, G., Fowler, B., Hughes, A., & Mu, X. (2002). Alternative surrogates for video objects in a digital library: Users’ perspectives on their relative usability. Proceedings of the 6th European Conference on Digital Libraries, September 16-18, 2002, Rome, Italy. http://www.open-video.org/papers/ECDL2002.020620.pdf
  • Wildemuth, B. M., Marchionini, G., Yang, M., Geisler, G., Wilkens, T., Hughes, A., & Gruss, R. (2003). How fast is too fast? Evaluating fast forward surrogates for digital video. Proceedings of the 3rd ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL 2003), 221-230. http://www.open-video.org/papers/p221-wildemuth.pdf
  • Wilkens, T., Hughes, A., Wildemuth, B. M., & Marchionini, G. (2003). The role of narrative in understanding digital video: An exploratory analysis. Proceedings of the Annual Meeting of the American Society for Information Science & Technology, 40, 323-329. http://www.open-video.org/papers/Wilkens_Asist_2003.pdf
  • Yang, M., & Marchionini, G. (2005). Deciphering visual gist and its implications for video retrieval and interface design. Conference on Human Factors in Computing Systems (CHI) (Portland, OR, Apr. 2-7, 2005), 1877-1880. http://www.open-video.org/papers/MengYang_050205_CHI.pdf
  • Yang, M., Wildemuth, B. M., Marchionini, G., Wilkens, T., Geisler, G., Hughes, A., Gruss, R., & Webster, C. (2003). Measures of user performance in video retrieval research. UNC School of Information and Library Science (SILS) Technical Report TR-2003-02. http://sils.unc.edu/research/publications/reports/TR-2003-02.pdf
