WP5.4 - Introduction
  • Knowledge Extraction from Complementary Sources
    • This activity is concerned with augmenting the semantic multimedia metadata basis through analysis of complementary textual, speech, and semi-structured data
    • Focus in first 12 months
      • Joint work between DFKI, UEP and DCU on aligning event extraction from textual football match reports with event recognition in video coverage of the same match
    • Focus in following 12 months
      • Joint work between DFKI, UEP and DCU on the extension of the event alignment work towards cross-media feature extraction (aligning low-level image/video features with events extracted in aligned textual and semi-structured data)
      • Joint work between DFKI, UEP, TUB and GET (cross-WP cooperation with WP3.3) on analyzing textual metadata in primary sources (OCR applied to text detected in images).
Text-Video Mapping in the Football Domain

Alignment of events extracted from unstructured textual data and from the semi-structured tabular data in the SmartWeb corpus (DFKI) with events detected by the video analysis (DCU).

  • Cooperation: DFKI, UEP, DCU
  • Resources:
    • DFKI: SmartWeb Data Set (textual and tabular match reports)
    • DFKI/UEP: Additional minute-by-minute textual match reports ('tickers') from other web resources
    • DCU: Video Detectors (Crowd image detector, Speech-Band Audio Activity, On-Screen Graphics Tracking, Motion activity measure, Field Line orientation, Close-up)
  • Textual and semi-structured data (tabular, XML files) are exploited as background knowledge in filtering the video analysis results and will possibly help in further improving the corresponding video analysis algorithms
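The filtering step described above can be sketched as follows. This is a minimal illustration, not the project's implementation: the event tuples, type names, and the one-minute tolerance window are invented for the example.

```python
# Hypothetical sketch: filtering video-detected events against textual
# background knowledge. Both streams are assumed to carry a match
# minute and an event type; real detector output is richer than this.

def filter_video_events(video_events, text_events, window=1):
    """Keep a video-detected event only if a textual event of the same
    type was reported within `window` minutes of it."""
    kept = []
    for minute, etype in video_events:
        if any(t_type == etype and abs(t_min - minute) <= window
               for t_min, t_type in text_events):
            kept.append((minute, etype))
    return kept

video = [(12, "goal"), (30, "foul"), (44, "goal")]
text = [(12, "goal"), (43, "goal")]
print(filter_video_events(video, text))  # goals at minutes 12 and 44 survive
```

Discrepancies found this way (the dropped foul at minute 30) could also be fed back to improve the video detectors themselves, as the slide suggests.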
Resources
  • The SmartWeb Data Set as provided by DFKI is an experimental data set for ontology-based information extraction and ontology learning from text that has been compiled for the SmartWeb project.
  • The data set consists of:
    • An ontology on football (soccer) that is integrated with foundational (DOLCE), general (SUMO) and task-specific (discourse, navigation) ontologies.
    • A corpus of semi-structured and textual match reports (German and English documents) that are derived from freely available web sources. The bilingual documents are not translations, but are aligned on the level of a particular match (i.e. they are about the same match).
    • A knowledge base of events and entities in the world cup domain that have been automatically extracted from the German documents.
  • For the purposes of the experiment described here we were mostly interested in the events that are described by the semi-structured data.
DCU: Video Analysis Data
  • Framework for event detection in broadcast video of multiple field sports, as provided by DCU
  • Video detectors used by DCU
    • Crowd image detector
    • Speech-Band Audio Activity
    • On-Screen Graphics Tracking
    • Motion activity measure
    • Field Line orientation
    • Close-up
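One common way to use several such detectors together is to combine their per-window confidences into a single event score. The sketch below is illustrative only (it is not DCU's actual framework); the uniform weights are a placeholder assumption.

```python
# Illustrative combination of the six detector confidences listed above
# into one per-window score via a weighted sum; uniform weights are a
# made-up placeholder, not tuned values from the project.

DETECTORS = ["crowd", "audio", "graphics", "motion", "field_line", "close_up"]
WEIGHTS = {d: 1.0 / len(DETECTORS) for d in DETECTORS}  # uniform, for illustration

def window_score(confidences):
    """confidences: dict detector -> value in [0, 1] for one time window."""
    return sum(WEIGHTS[d] * confidences.get(d, 0.0) for d in DETECTORS)

print(round(window_score({"crowd": 0.9, "audio": 0.8, "motion": 0.7}), 2))  # 0.4
```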
[Figure: per-minute detector confidence plots (crowd, audio, visual motion) for a sample match]
DFKI/UEP: Extraction of Tickers

Minute-by-minute reports from different Web resources:

  • ligalive.de
  • ard.de
  • bild.de
Information Extraction from Text

Information Extraction with the DFKI tool SProUT

  • SProUT (Shallow Processing with Unification and Typed Feature Structures) is a tool for multilingual shallow text processing and information extraction
  • A SProUT Java web service takes the minute-by-minute reports as input, parses them, and extracts a new XML file for each minute of a particular match
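The per-minute extraction step can be caricatured as below. SProUT itself is a unification-based IE engine with typed feature structures, so the regular expression here merely stands in for its grammars; the ticker line format and element names are assumptions for the example.

```python
# Hedged sketch of ticker-to-XML extraction: parse "MM' text" lines and
# emit one <minute> element per line. The line format and XML schema
# are illustrative, not the SProUT web service's actual output.
import re
import xml.etree.ElementTree as ET

LINE = re.compile(r"^(?P<minute>\d+)'?\s+(?P<text>.+)$")

def ticker_to_xml(lines):
    root = ET.Element("match")
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue  # skip lines that are not minute-stamped events
        ev = ET.SubElement(root, "minute", n=m.group("minute"))
        ev.text = m.group("text")
    return ET.tostring(root, encoding="unicode")

print(ticker_to_xml(["40' Goal for Italy", "41' Kick-off"]))
```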

Aligning and Aggregation of Textual Events

[Pipeline diagram: information extraction results (SProUT) from the various tickers → alignment of events across tickers (example: minute 40) → data aggregation for later use; minute-by-minute and tabular reports feed video–textual data time alignment and, together with video event detection data (features) from DCU, cross-media feature extraction]
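The aggregation step across tickers can be sketched as minute-based merging. The schema below (source names, `(minute, event_type)` tuples) is invented for illustration:

```python
# Minimal sketch of aggregating events from several tickers by minute;
# the ticker names and event tuples are illustrative, not the project's
# actual data model.
from collections import defaultdict

def aggregate(tickers):
    """tickers: dict source -> list of (minute, event_type).
    Returns minute -> set of event types reported by any source."""
    by_minute = defaultdict(set)
    for source, events in tickers.items():
        for minute, etype in events:
            by_minute[minute].add(etype)
    return dict(by_minute)

tickers = {
    "ligalive": [(40, "goal"), (42, "substitution")],
    "ard": [(40, "goal")],
}
print(aggregate(tickers)[40])  # {'goal'}
```

Events reported by several independent tickers for the same minute (like the goal at minute 40) can be treated as higher-confidence anchors for the later video–text time alignment.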

Match vs Video Time

  • Free-kick evaluation
  • Possible OCR on video
  • Time differences tracking
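Tracking the difference between match time and video time could work as follows, under the assumption that OCR on the on-screen graphics occasionally yields `(video_second, match_minute)` pairs; the median makes the estimate robust to occasional OCR misreads. The sample values are invented.

```python
# Sketch of match-vs-video time offset estimation from assumed OCR
# samples of the on-screen match clock; data values are illustrative.
import statistics

def estimate_offset(samples):
    """samples: list of (video_seconds, match_minute) pairs.
    Returns the estimated offset (seconds) of video time over match time."""
    diffs = [video_s - match_min * 60 for video_s, match_min in samples]
    return statistics.median(diffs)

samples = [(130, 2), (312, 5), (610, 10)]  # video runs ~10 s ahead
print(estimate_offset(samples))  # 10
```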


Cross-media Features

  • Purpose: Cross-Media features describe information that occurs in textual/semi-structured data as well as in video data and can therefore be used as additional support in video analysis.
  • Goal: use video detectors aligned with events extracted from text/semi-structured data as cross-media features
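As an illustration, a cross-media feature vector for one event could concatenate the textual event type (one-hot encoded) with the video detector confidences for the aligned minute. The type and detector inventories below are small made-up subsets:

```python
# Illustrative construction of a cross-media feature vector; the event
# types and detector names are assumed subsets, not the project's full
# inventories.

EVENT_TYPES = ["goal", "foul", "free_kick"]
DETECTORS = ["crowd", "audio", "motion"]

def cross_media_features(event_type, detector_conf):
    one_hot = [1.0 if event_type == t else 0.0 for t in EVENT_TYPES]
    video = [detector_conf.get(d, 0.0) for d in DETECTORS]
    return one_hot + video

print(cross_media_features("goal", {"crowd": 0.9, "audio": 0.8}))
# [1.0, 0.0, 0.0, 0.9, 0.8, 0.0]
```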
Summary
  • Extracted: 1200 events, 45 event-types
  • After alignment: 850 events describing five matches from the World Cup 2006 finals
  • 170 events per game on average
  • Cross-media descriptors for every event-type
Future plans

  • In WP5.4.1, continue work on mapping between results of video analysis and complementary resource analysis:
    • Use extracted image descriptors from training data (video + aligned text extraction) for the classification of fine-grained events in test data (i.e. other videos), all based on minute-by-minute alignment
    • Cooperate with TUB on Video OCR to help time video-text alignment
  • WP5.4.2: Images and text as mutually complementary resources
  • WP5.4.3: Image retrieval based on enhanced query processing and complementary resource analysis

Mining over Football Match Data: Seeking Associations among Explicit and Implicit Events
  • Apart from identifying individual events, it might be useful to find out about general statistical dependencies (associations) among types of events
  • Initial experiments carried out on a single type of resource – structured data
  • In the future, events extracted from text and video could be considered as well
  • Use of LISp-Miner tool (UEP)
    • Data mining procedure 4ft-Miner mines for various types of association rules and conditional association rules
  • Potential application: discovering new relationships to be inserted into the domain ontology or knowledge base
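The kind of association the 4ft-Miner procedure looks for can be illustrated with a toy confidence computation for a rule between two event types. This is a stand-in for the real tool, and the match data below is invented:

```python
# Toy stand-in for association-rule mining over match data: confidence
# of the rule "antecedent event occurs => consequent event occurs" at
# match level. Event names and data are invented for the demo.

def rule_confidence(matches, antecedent, consequent):
    """matches: list of sets of event types observed in one match."""
    with_a = [m for m in matches if antecedent in m]
    if not with_a:
        return 0.0
    return sum(1 for m in with_a if consequent in m) / len(with_a)

matches = [
    {"goal", "foul", "yellow_card"},
    {"foul", "yellow_card"},
    {"goal"},
    {"yellow_card"},
]
# Confidence of "yellow_card => foul": 2 of the 3 matches with a yellow
# card also contain a foul.
print(rule_confidence(matches, "yellow_card", "foul"))
```

Rules whose confidence and support exceed chosen thresholds would be the candidates for insertion into the domain ontology or knowledge base.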