
Scenario based Dynamic Video Abstractions using Graph Matching




  1. Scenario based Dynamic Video Abstractions using Graph Matching Jeongkyu Lee University of Bridgeport

  2. Outline • Introduction • Graph-based Video Segmentation • Multi-level Video Scenarios • Dynamic Video Abstractions • Experiments • Conclusion and Future work

  3. Introduction • Introduction • Video Abstractions • Limitations • Automatic Video Analysis System • Graph-based Video Segmentation • Multi-level Video Scenarios • Dynamic Video Abstractions • Experiments • Conclusion and Future work

  4. Video Abstractions • Static video summary • A collection of static images of video segments • Nonlinear browsing of video • Dynamic video skimming • A shorter version of video arranged in time • Preserving the time-evolving element of video

  5. Limitations • Static video summary • Loses semantic content of the video, since it sacrifices the time-evolving element • Dynamic video skimming • Very subjective • Gap between human cognition and automated results

  6. Automatic Video Analysis System • Provides both static video summary and dynamic video skimming • Video segmentation by representing frames as graphs and matching them • Constructing a scene tree to illustrate hierarchical content of video • Generating multi-level scenarios using the scene tree • Multi-level highlights and multi-length summaries using the scenarios

  7. AVAS

  8. Contributions • We propose a graph based shot boundary detection (SBD) and a graph similarity measure (GSM) to capture both the spatial and temporal relationships among video frames. • We generate multi-level scenarios of a video by accessing a scene tree on different levels to provide various levels of video abstraction. • We propose dynamic video abstractions which are able to generate both static video summary (i.e., multi-length summarizations) and dynamic video skimming (i.e., multi-level video highlights).

  9. Graph-based Video Segmentation • Introduction • Graph-based Video Segmentation • Region Adjacency Graph (RAG) • Graph Similarity Measure (GSM) • Shot Boundary Detection • Multi-level Video Scenarios • Dynamic Video Abstractions • Experiments • Conclusion and Future work

  10. RAG • Region segmentation using EDISON (Edge Detection and Image Segmentation System) • Region Adjacency Graph (RAG)
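The RAG construction on this slide can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the segmenter (e.g., EDISON) yields a 2D label map of region ids, uses mean region color as the node attribute, and 4-neighborhood adjacency for edges.

```python
import numpy as np

def build_rag(labels, image):
    """Build a region adjacency graph from a segmentation label map.

    labels: 2D int array of region ids (e.g., from a segmenter such as EDISON)
    image:  HxWx3 array of pixel colors
    Returns (nodes, edges): nodes maps region id -> mean color attribute,
    edges is a set of (id_a, id_b) pairs for spatially adjacent regions.
    """
    nodes, edges = {}, set()
    for r in np.unique(labels):
        nodes[r] = image[labels == r].mean(axis=0)  # node attribute: mean color
    h, w = labels.shape
    # 4-neighborhood adjacency: right and down neighbors cover every pair once
    for dy, dx in ((0, 1), (1, 0)):
        a = labels[: h - dy, : w - dx]
        b = labels[dy:, dx:]
        for pa, pb in zip(a[a != b], b[a != b]):
            edges.add((min(pa, pb), max(pa, pb)))
    return nodes, edges
```

The choice of mean color as the node attribute is an assumption for the sketch; any per-region feature vector would slot in the same way.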

  11. Example of RAG

  12. Neighborhood Graphs • Compare two RAGs to find the similarity between them • Decompose each RAG into neighborhood graphs • Similarity between two neighborhood graphs

  13. Maximal Common Subgraph • In order to find the largest common subgraph (GC), we first construct the association graph, whose nodes are the compatible pairs of nodes from the two graphs • We obtain GC by finding the maximal clique in the association graph • We call this algorithm Maximal Common Subgraph; it is based on a recursive formulation

  14. Maximal Common Subgraph • To find GC, use Maximal Common Subgraph
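The association-graph construction from slide 13 can be sketched as below. This is a simplified illustration, assuming graphs are given as a node-label dict plus an edge set, that "compatible pair" means equal node labels, and using a plain recursive clique search rather than the paper's exact procedure.

```python
def maximal_common_subgraph(nodes1, edges1, nodes2, edges2):
    """Largest common subgraph via the association graph: its nodes are
    label-compatible node pairs, its edges connect pairs whose adjacency
    agrees in both graphs; a maximum clique gives the common subgraph."""
    adj1 = lambda u, v: (u, v) in edges1 or (v, u) in edges1
    adj2 = lambda u, v: (u, v) in edges2 or (v, u) in edges2
    # compatible pairs: nodes with the same label
    pairs = [(u, v) for u in nodes1 for v in nodes2 if nodes1[u] == nodes2[v]]
    adj = {p: set() for p in pairs}
    for i, (u1, v1) in enumerate(pairs):
        for u2, v2 in pairs[i + 1:]:
            if u1 != u2 and v1 != v2 and adj1(u1, u2) == adj2(v1, v2):
                adj[(u1, v1)].add((u2, v2))
                adj[(u2, v2)].add((u1, v1))
    best = [frozenset()]
    def grow(clique, candidates):      # recursive maximum-clique search
        if len(clique) > len(best[0]):
            best[0] = clique
        for p in list(candidates):
            grow(clique | {p}, candidates & adj[p])
            candidates = candidates - {p}
    grow(frozenset(), set(pairs))
    return best[0]
```

A graph similarity can then be derived from the clique size, e.g. normalizing by the larger graph's node count; the transcript does not give the paper's GSM formula, so that normalization is only one common choice.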

  15. GSM • Graph Similarity Measure (GSM) • Simplified GSM by reducing search area

  16. Shot Boundary Detection • Abrupt change: If GSMsim is more than a certain threshold value (Tcut), the two frames corresponding to the two RAGs are considered to be in the same shot. • Gradual change: Starting and ending frames can be found by tracking the low and continuous values of GSMsim
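The two detection rules above can be sketched over a precomputed similarity series. The thresholds `t_cut` and `t_low` and the minimum run length are illustrative assumptions; the transcript does not specify the paper's values.

```python
def detect_shot_boundaries(sims, t_cut=0.5, t_low=0.7, min_len=5):
    """sims[i]: GSM similarity between frames i and i+1.
    Abrupt cut: similarity falls below t_cut, so a new shot starts.
    Gradual change: a sustained run of low-but-continuous similarity."""
    cuts, gradual = [], []
    run_start = None
    for i, s in enumerate(sims):
        if s < t_cut:
            cuts.append(i + 1)           # abrupt change: new shot at frame i+1
            run_start = None
        elif s < t_low:                  # low, continuous similarity
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_len:
                gradual.append((run_start, i))  # start/end of gradual change
            run_start = None
    if run_start is not None and len(sims) - run_start >= min_len:
        gradual.append((run_start, len(sims)))
    return cuts, gradual
```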

  17. Shot Boundary Detection

  18. Multi-level Video Scenarios • Introduction • Graph-based Video Segmentation • Multi-level Video Scenarios • Scene Tree Construction • Multi-Level Scenario Selection • Dynamic Video Abstractions • Experiments • Conclusion and Future work

  19. Scene Tree Construction • Create a scene node for each shot • Check whether the current shot is related to the previous shots using Corr • For two correlated scene nodes SNi and SNj • If SNi and SNj-1 do not have parent nodes -> create a new parent node • If SNi and SNj-1 share the same ancestor node -> connect SNi to the ancestor node • If SNi and SNj-1 do not share any ancestor node -> connect SNi to the oldest ancestor node of SNi-1 • If there are more shots, go to step 2 • Determine the key RAG for each node
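The linking steps above can be sketched roughly as follows. This is a heavily simplified reading of the slide: `corr` and its threshold are placeholder assumptions, and the ancestor rules are collapsed into "join under the earlier node's root", which is coarser than the paper's three cases.

```python
class SceneNode:
    def __init__(self, shot):
        self.shot, self.parent, self.children = shot, None, []

def root_of(n):
    while n.parent:
        n = n.parent
    return n

def build_scene_tree(shots, corr, threshold=0.5):
    """Link each shot's scene node to the most recent correlated shot:
    create a shared parent if the earlier node is still a root, otherwise
    attach the current node under the earlier node's topmost ancestor."""
    nodes = [SceneNode(s) for s in shots]
    for j in range(1, len(nodes)):
        cur = nodes[j]
        for i in range(j - 1, -1, -1):          # scan previous shots
            if corr(shots[i], shots[j]) >= threshold:
                prev = nodes[i]
                if prev.parent is None:          # no parent yet: make one
                    parent = SceneNode(None)
                    for n in (prev, cur):
                        n.parent = parent
                        parent.children.append(n)
                else:                            # attach to existing ancestor
                    anc = root_of(prev)
                    cur.parent = anc
                    anc.children.append(cur)
                break
    return nodes
```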

  20. Multi-level Scenario Selection

  21. Dynamic Video Abstraction • Introduction • Graph-based Video Segmentation • Multi-level Video Scenarios • Dynamic Video Abstractions • Multi-level Video Highlights • Multi-length Video Summarization • Experiments • Conclusion and Future work

  22. Multi-Level Video Highlights • Let L be a summary level of V selected by a user • Pick the shots corresponding to the scene nodes from a scenario with level L • Concatenate the selected shots to make a highlight video
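The three steps on this slide amount to a simple selection-and-concatenation, sketched below; the scenario representation (a mapping from level to shot indices) is an assumption for illustration.

```python
def highlight(scenario_levels, level, shots):
    """scenario_levels[L]: list of shot indices selected by the level-L
    scenario; shots[i]: the list of frames in shot i. Concatenating the
    selected shots in time order yields the highlight video."""
    return [frame for idx in scenario_levels[level] for frame in shots[idx]]
```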

  23. Multi-Length Video Summarization • Let L and T be a summary level and a length • For each scene node in a scenario with level L, select a key RAG • For each selected key RAG, find its relevant frames using GSM • Concatenate the selected frames
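One way to realize the steps above is sketched here. The even split of the length budget T across scene nodes and the "top-ranked frames by GSM" selection are assumptions; the transcript does not define how relevant frames are chosen or how T is apportioned.

```python
def summarize(key_rags, frames, gsm, level_nodes, T):
    """key_rags[node]: the key RAG of a scene node in the level-L scenario;
    gsm(rag, frame): similarity of a frame to a key RAG. For each node,
    take the frames most relevant to its key RAG until the summary has
    roughly T frames in total."""
    per_node = max(1, T // len(level_nodes))    # naive even budget split
    summary = []
    for node in level_nodes:
        ranked = sorted(frames, key=lambda f: gsm(key_rags[node], f),
                        reverse=True)
        summary.extend(ranked[:per_node])       # most relevant frames first
    return summary[:T]
```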

  24. Experimental results • Introduction • Graph-based Video Segmentation • Multi-level Video Scenarios • Dynamic Video Abstractions • Experiments • Data set • Efficiency of GSM • Performance of SBD • Evaluation of Video Abstraction • Conclusion and Future work

  25. Data set • AVI format with 15 frames per second and 160 x 120 pixel resolution.

  26. Efficiency of GSM

  27. Performance of SBD

  28. Evaluation of Video Abstractions

  29. Evaluation of Video Abstractions • To assess the performance of the video summarization, we ask two questions of five reviewers. • They assign two scores ranging from 0 to 10 to each video. • A score of 0 means `Poor' while a score of 10 means `Excellent'. • To be fair, the reviewers are given a chance to modify the scores at any time during the review process. • The following table shows the results of the performance evaluation of the video summarization from the five reviewers. • Each parenthesized value is the standard deviation of the scores assigned at that level.

  30. Evaluation of Video Abstractions • Informativeness: How much of the content of the video clip do you understand? • Satisfaction: How satisfactory is the summarized video compared to the original?

  31. Evaluation of Video Abstractions • For example, the summarization at the bottom level reduces the original video by 80 %. • However, the information content drops only around 15 %, and user satisfaction remains around 85 % for the same level of summarization. • Even the medium-level summary retains around 70 % of the information as well as 70 % user satisfaction, while compressing the original video to 17.6 % of its length.

  32. AVAS

  33. Sample Result

  34. Conclusions • Introduction • Graph-based Video Segmentation • Multi-level Video Scenarios • Dynamic Video Abstractions • Experiments • Conclusions

  35. Conclusions • We propose a graph-based shot boundary detection and graph-based similarity measure (GSM) • We generate multi-level scenarios of a video by accessing a scene tree on different levels • We propose dynamic video abstractions that are able to generate both static video summary and dynamic video skimming.
