Content Augmentation for Mixed-Mode News Broadcasts


Presentation Transcript


  1. Content Augmentation for Mixed-Mode News Broadcasts http://gate.ac.uk/ http://nlp.shef.ac.uk/ Mike Dowman, Valentin Tablan, Hamish Cunningham (University of Sheffield); Borislav Popov (Ontotext Lab, Sirma AI)

  2. News broadcasts are often augmented with textual content, but usually only limited content is available, and the TV company controls content production. Rich News finds content automatically. Developed on BBC news: recordings of broadcasts go in one end, and relevant news web pages are associated with the stories in the broadcasts fully automatically. Augmented Television News

  3. Semantic Indexing of News: Systems already exist that can index news broadcasts in terms of the ‘named entities’ they refer to. • E.g. Mark Maybury’s Broadcast News Navigator. • Entities such as cities, people and organizations are marked as such. Rich News can improve annotation: • Annotation is in terms of an ontology. • It uses Automatic Speech Recognition, so it can be applied when no subtitles are available. • Web pages are used to help find named entities.

  4. Obtaining a transcript: speech recognition produces poor-quality transcripts with many mistakes. Broadcast segmentation: a news broadcast contains several stories, so how do we work out where one starts and another stops? Story summarization and classification: how can we make a summary or headline for each story from a poor-quality transcript, and how can we work out what kind of news it reports? Key Problems

  5. Rich News Architecture: Media File → THISL Speech Recogniser → C99 Topical Segmenter → TF.IDF Key Phrase Extraction → Web-Search and Document Matching → KIM Information Extraction → Index Document Creation → Manual Annotation (Optional) → story index documents with textual descriptions, semantic annotations and a link to a part of the original media file

  6. ASR is performed by the THISL system, which is based on the ABBOT connectionist speech recognizer and optimized specifically for use on BBC news broadcasts. Average word error rate is 29%, rising to as much as 90% for out-of-studio recordings. Using ASR Transcripts
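
To make the error figures concrete: word error rate is the word-level edit distance (substitutions, insertions and deletions) between the ASR output and a reference transcript, divided by the length of the reference. A minimal Python sketch, not part of THISL or Rich News:

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# e.g. word_error_rate("the prime minister said", "the prime minster said today") == 0.5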

  7. Uses the C99 segmenter: it removes common words from the ASR transcripts, stems the other words to get their roots, and then looks to see in which parts of the transcripts the same words tend to occur. These parts will probably report the same story. Topical Segmentation
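
The preprocessing the slide describes, removing common words and stemming the rest, can be sketched as follows. NLTK is used purely for illustration (an assumption, not necessarily what Rich News uses), and this is only the front end of C99, which then compares the preprocessed sentences to find lexically cohesive regions and places story boundaries where cohesion drops.

from nltk.corpus import stopwords      # requires: nltk.download('stopwords')
from nltk.stem import PorterStemmer

_stop = set(stopwords.words('english'))
_stem = PorterStemmer().stem

def preprocess(sentence: str) -> set[str]:
    """Keep only alphabetic, non-stopword tokens, reduced to their stems."""
    return {_stem(w) for w in sentence.lower().split()
            if w.isalpha() and w not in _stop}

def overlap(a: set[str], b: set[str]) -> float:
    """Simple lexical-cohesion score between two preprocessed sentences."""
    return len(a & b) / (len(a | b) or 1)

# C99 builds a matrix of such similarities between all sentence pairs, ranks it,
# and places story boundaries where cohesion between neighbouring regions drops.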

  8. Term frequency inverse document frequency (TF.IDF) chooses sequences of words that tend to occur more frequently in the story than they do in the language as a whole. Any sequence of up to three words can be a phrase, and up to four phrases are extracted per story. Key Phrase Extraction
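
A rough approximation of this step using scikit-learn is sketched below. It is a sketch under the assumption that the other stories in the broadcast stand in for the background language model; the real system scores phrases against the language as a whole.

from sklearn.feature_extraction.text import TfidfVectorizer

def key_phrases(story_texts, per_story=4):
    """Top TF.IDF phrases (1-3 words) for each story in a broadcast."""
    vec = TfidfVectorizer(ngram_range=(1, 3), stop_words='english')
    tfidf = vec.fit_transform(story_texts)      # rows: stories, columns: phrases
    vocab = vec.get_feature_names_out()
    results = []
    for row in tfidf:
        scores = row.toarray().ravel()
        top = scores.argsort()[::-1][:per_story]
        results.append([vocab[i] for i in top if scores[i] > 0])
    return results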

  9. The key-phrases are used to search the BBC, Times, Guardian and Telegraph websites for web pages reporting each story in the broadcast. Searches are restricted to the day of broadcast or the day after, and are repeated using different combinations of the extracted key-phrases. The text of the returned web pages is compared to the text of the transcript to find matching stories. Web Search
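
A common way to do this kind of transcript-to-page comparison is cosine similarity over bag-of-words vectors. The sketch below illustrates that flavour of matching; it is an assumption rather than the exact algorithm used in Rich News, and the threshold value is an arbitrary example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def best_match(transcript_segment, candidate_pages, threshold=0.3):
    """Return the candidate web page most similar to the ASR transcript
    segment, or None if nothing is similar enough."""
    texts = [transcript_segment] + candidate_pages
    vectors = TfidfVectorizer(stop_words='english').fit_transform(texts)
    sims = cosine_similarity(vectors[0], vectors[1:]).ravel()
    best = sims.argmax()
    return candidate_pages[best] if sims[best] >= threshold else None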

  10. Evaluation: success in finding matching web pages was investigated. • Evaluation was based on 66 news stories from 9 half-hour news broadcasts. • Web pages were found for 40% of stories. • 7% of pages reported a closely related story instead of the one in the broadcast. Results are based on an earlier version of the system, which used only BBC web pages.

  11. Web pages can be made available to the viewer as additional content. The web pages contain a headline, summary and section for each story, and high-quality text that is readable and contains correctly spelt proper names. They give more in-depth coverage of the stories. Web pages could be included in the broadcast by the TV company, or discovered by a device in viewers’ homes. Using the Web Pages

  12. The KIM knowledge management system can semantically annotate the text derived from the web pages: KIM will identify people, organizations, locations, etc. KIM performs well on the web-page text, but very poorly when run on the transcripts directly. This allows semantic, ontology-aided searches for stories about particular people, locations and so on. For example, we could search for people called Sydney, which would be difficult with a text-based search. Semantic Annotation
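
The ‘people called Sydney’ example can be made concrete with a small sketch. The annotation structure below is hypothetical (KIM stores annotations against its own ontology and exposes them through its own APIs), but the principle of filtering by ontology class rather than by surface string is the same.

from dataclasses import dataclass

@dataclass
class EntityAnnotation:
    text: str          # surface form in the document, e.g. "Sydney"
    entity_class: str  # ontology class, e.g. "Person", "City", "Organization"
    story_id: str

def stories_about_person(annotations, name):
    """Ontology-aided search: only Person annotations count, so the city of
    Sydney is not returned when searching for people called Sydney."""
    return {a.story_id for a in annotations
            if a.entity_class == "Person" and a.text == name}

def stories_mentioning(annotations, name):
    """Plain text-style search: matches any entity with that surface form."""
    return {a.story_id for a in annotations if a.text == name}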

  13. Search for Entities

  14. Story Retrieval

  15. Use teletext subtitles (closed captions) when they are available. Better story segmentation through visual cues and latent semantic analysis. Integrate selected stories from multiple news broadcasts into custom news services. Find out when programmes covering stories in more depth will be broadcast, and make this information available to the viewer. Future Improvements
