THE CURRENT LANDSCAPE:

Media Streams: a video annotation and editing system designed by Marc Davis, Brian Williams, and Golan Levin, 1991-1997, at the Machine Understanding Group of the MIT Media Laboratory and Interval Research Corporation.



Presentation Transcript


  1. THE CURRENT LANDSCAPE: Media Streams, a video annotation and editing system designed by Marc Davis, Brian Williams, and Golan Levin, 1991-1997, Machine Understanding Group of the MIT Media Laboratory and Interval Research Corporation. Eric Bailey, Peter Worth. ED 229 C Seminar in Learning, Design and Technology, Stanford University School of Education. January 21, 2004

  2. THE CURRENT LANDSCAPE: Media Streams is a system for annotating, retrieving, repurposing, and automatically assembling digital video.

  3. WHAT ARE THEY SOLVING?

  4. WHAT ARE THEY SOLVING? The problem of finding video information in a large and growing archive: examining how to annotate and describe video data in a way that is comprehensible to people and searchable by computer.

  5. WHAT ARE THEY SOLVING? Their intent is to give users access to video materials so that they can repurpose or recompose them.

  6. WHAT ARE THEY SOLVING? Current annotation is limited by the choice of keywords, resulting in missed search opportunities. Keywords are often not specific enough for searching video.

  7. WHAT ARE THEY SOLVING? Current annotation has no universal guidelines. Annotation “language” is personal and varies from person to person.

  8. HOW ARE THEY SOLVING IT?

  9. HOW ARE THEY SOLVING IT? Video-annotation software that allows multiple annotations of the same clip, including annotations that overlap to varying degrees.

  10. HOW ARE THEY SOLVING IT? To make search and annotation more reliable, Media Streams uses a system of visual icons that represent what is depicted in the video clip. It can both read and write computer-generated icons.

  11. “…Media Timeline, on which iconic annotations of video are temporally indexed. Each stream in the Media Timeline contains annotations about a unique aspect of video content, such as settings, characters, objects, actions, camera motions, etc.” Golan Levin, Principal Designer of Icon Visual Language
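The Media Timeline described above can be sketched as a set of parallel, time-indexed annotation streams. This is a hypothetical illustration, not the actual Media Streams implementation; the icon names, field names, and time values are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    icon: str      # iconic descriptor, e.g. "adult-male" or "pan-left" (invented names)
    start: float   # seconds into the clip
    end: float     # annotations in different streams may overlap freely

@dataclass
class Stream:
    aspect: str    # the unique aspect this stream covers: "characters", "camera", ...
    annotations: list = field(default_factory=list)

@dataclass
class MediaTimeline:
    streams: list = field(default_factory=list)

    def at(self, t: float):
        """All annotations, across every stream, active at time t."""
        return [a for s in self.streams
                  for a in s.annotations if a.start <= t < a.end]

timeline = MediaTimeline([
    Stream("characters", [Annotation("adult-male", 0.0, 12.0)]),
    Stream("camera",     [Annotation("pan-left", 3.0, 7.0)]),
])
print([a.icon for a in timeline.at(5.0)])   # ['adult-male', 'pan-left']
```

Because each stream covers one aspect, a single moment in the clip can carry several simultaneous, partially overlapping annotations, which is the property slide 9 highlights.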

  12. HOW ARE THEY SOLVING IT? Users select an individual icon or combination of icons (compound) to annotate a clip. Icons represent what is visually depicted in a scene, not the meaning of a scene.

  13. “…Icon Space, an atemporal, hierarchically-indexed "dictionary" of iconic descriptors. The Icon Space incorporates utilities for icon construction and search.” Golan Levin, Principal Designer of Icon Visual Language
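A hierarchically indexed dictionary of descriptors, as the Icon Space quote describes, can be sketched as nested categories with a recursive lookup that returns a descriptor's full path. The category and descriptor names below are invented placeholders, not the real Media Streams taxonomy (slide 15 lists some actual hierarchies).

```python
# Hypothetical slice of an Icon Space: dicts are category levels,
# lists are leaf descriptors.
ICON_SPACE = {
    "time": {"time-of-day": ["dawn", "noon", "dusk", "night"]},
    "space": {"functional-building-space": ["kitchen", "office"]},
    "characters": {"body-types": ["adult", "child"],
                   "occupations": ["doctor", "farmer"]},
}

def find(space, term, path=()):
    """Depth-first search; returns the descriptor's hierarchical path, or None."""
    for key, sub in space.items():
        if isinstance(sub, dict):
            hit = find(sub, term, path + (key,))
            if hit:
                return hit
        elif term in sub:
            return path + (key, term)
    return None

print(find(ICON_SPACE, "noon"))   # ('time', 'time-of-day', 'noon')
```

The hierarchical path is what makes icon search "atemporal": a descriptor is located by where it sits in the category tree, independent of any clip or timeline.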

  14. HOW ARE THEY SOLVING IT? Searching the archive lets users set parameters for what they want to appear in the clip. Media Streams locates existing footage matching that description. It can then recompose existing shots to create a clip that meets the desired parameters.
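The retrieve-then-recompose behavior on slide 14 can be sketched as two steps: find shots whose annotations satisfy the whole request, and, failing that, greedily stitch together shots that each cover part of it. The archive, shot ids, and icon names are invented, and the real Media Streams matcher is far richer than this full-coverage toy.

```python
# Hypothetical archive: shot id -> set of icon annotations.
archive = {
    "shot-017": {"adult-male", "kitchen"},
    "shot-042": {"adult-male", "pan-left"},
    "shot-108": {"child", "office"},
}

def retrieve(archive, wanted):
    """Shots whose annotations include every requested icon."""
    return sorted(cid for cid, icons in archive.items() if wanted <= icons)

def assemble(archive, wanted):
    """Fallback recomposition: greedily pick shots until the request is covered."""
    sequence, covered = [], set()
    for cid, icons in sorted(archive.items()):
        gain = (wanted - covered) & icons
        if gain:
            sequence.append(cid)
            covered |= gain
        if covered == wanted:
            break
    return sequence if covered == wanted else []

print(retrieve(archive, {"adult-male", "kitchen"}))   # ['shot-017']
print(assemble(archive, {"kitchen", "pan-left"}))     # ['shot-017', 'shot-042']
```

When no single shot matches the full description, the fallback builds a new sequence from partial matches, which is the "recompose existing shots" idea in miniature.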

  15. “…cross-section of the icon hierarchies, including: Historic Period, Calendar Time, Time of Day, Functional Building Space, Topological Relationships, … Character Body Types, Occupations, Tools, Food, Animals, Weather, and a variety of other objects and cinematographic relationships…” Golan Levin, Principal Designer of Icon Visual Language

  16. WHAT KEY ISSUES WERE FOUND?

  17. WHAT KEY ISSUES WERE FOUND? Syntax and semantics: the meaning of video information is constructed from its relationship to the surrounding shots. Annotating by physical description is effective; annotating by more complex meaning does not hold up.

  18. HOW DOES IT RELATE?

  19. HOW DOES IT RELATE? Although Media Streams is solving a different problem than ours, the issue of examining and creating meaning from video texts is significant.

  20. HOW DOES IT RELATE? The Media Timeline visually displays an entire clip. This is an interesting model for visualizing a film: users can quickly scan to find a relevant point instead of remembering timecode.

  21. HOW DOES IT RELATE? The Media Timeline makes the context of single clips within an entire work viewable. Users can mark clips while recognizing and understanding their syntax.


  23. HOW DOES IT RELATE? Icons propose an interesting way of quickly flagging types of clips, and could add meaning to the process of marking clips. They could also be useful for analytical tasks such as identifying elements of film grammar.

  24. REFERENCES: Davis, Marc. “Media Streams: An Iconic Visual Language for Video Representation.” In Readings in Human-Computer Interaction: Toward the Year 2000, edited by Ronald M. Baecker, Jonathan Grudin, William A. S. Buxton, and Saul Greenberg, 854-866. 2nd ed. San Francisco: Morgan Kaufmann Publishers, 1995. http://acg.media.mit.edu/people/golan/mediastreams/

  25. FIN
