
An Introduction to DUC-2003 Intrinsic Evaluation of Generic News Text Summarization Systems



  1. An Introduction to DUC-2003: Intrinsic Evaluation of Generic News Text Summarization Systems. Paul Over, Retrieval Group, Information Access Division; James Yen, Statistical Modeling and Analysis Group, Statistical Engineering Division. National Institute of Standards and Technology. Sponsored by DARPA and ARDA

  2. Document Understanding Conferences (DUC)… • Summarization has always been a TIDES component • An evaluation roadmap created in 2000 after spring TIDES PI meeting • Specifies a series of annual cycles • Year 1 (DUC-2001 at SIGIR in September 2001) • Intrinsic evaluation of generic summaries, • of newswire/paper stories • for single and multiple documents; • with fixed target lengths of 50, 100, 200, and 400 words • 60 sets of 10 documents used • 30 for training • 30 for test

  3. … Document Understanding Conferences (DUC) • Year 2 – short cycle – (DUC-2002 at ACL ’02 in July 2002) • Intrinsic evaluation of generic summaries, • of newswire/paper stories • for single and multiple documents • Abstracts of single documents and document sets • fixed lengths of 10, 50, 100, and 200 words • manual evaluation using SEE software at NIST • Extracts of document sets • fixed target lengths of 200 and 400 words • automatic evaluation at NIST and by participants • 60 sets of ~10 documents each • All for test • No new training data • Two abstracts/extracts per document (set)

  4. Goals of the talk • Provide an overview of DUC 2003: • Data: documents, topics, viewpoints, manual summaries • Tasks: • 1: very short (~10-word) single document summaries • 2-4: short (~100-word) multi-document summaries with focus (2: TDT event topics; 3: viewpoints; 4: question/topic) • Evaluation: procedures, measures • Experience with implementing the evaluation procedure • Introduce the results (what happened): • Basics of system performance on the measures • Sanity checking the results and measures • Exploration of various questions: • Performance of systems relative to baselines and humans • Relative performance among systems – significant differences?

  5. Data: Formation of test document sets • 30 TDT clusters (298 documents; ~352 sentences/docset) • 30 event topics and documents chosen by NIST • 15 from TDT2 • 15 from TDT3 • NIST chose a subset of the documents the TDT annotator decided were “on topic” • 30 TREC clusters (326 documents; ~335 sentences/docset) • Chosen by NIST assessors on topics of interest to them • No restrictions as to topic type • 30 TREC Novelty clusters (~66 relevant sentences/docset) • 30 Novelty topics picked by NIST (based on assessor agreement) • All (~25) Novelty track documents/cluster included • Relevant/novel sentences identified by Novelty assessors

  6. Manual abstract creation (x 4) • TDT docs: Task 1 very short single-doc summaries; + TDT topic: Task 2 short multi-doc summary • TREC docs: Task 1 very short single-doc summaries; + viewpoint: Task 3 short multi-doc summary • TREC Novelty docs: relevant/novel sentences + TREC Novelty topic: Task 4 short multi-doc summary

  7. Baseline summaries etc. • NIST (Nega Alemayehu) created baseline summaries • Baselines 2-5: automatic • based roughly on algorithms suggested by Daniel Marcu • no truncation of sentences, so some baseline summaries went over the limit (by <= 15 words) and some were shorter than required • Baseline 1 (task 1): original author’s headline • Use the document’s own “headline” element • Baseline 2 (tasks 2, 3) • Take the 1st 100 words in the most recent document. • Baseline 3 (tasks 2, 3) • Take the 1st sentence in the 1st, 2nd, 3rd,… document in chronological sequence until you have 100 words. • Baseline 4 (task 4) • Take the 1st 100 words from the 1st n relevant sentences in the 1st document in the set. (Documents ordered by the relevance ranking given with the topic.) • Baseline 5 (task 4) • Take the 1st relevant sentence from the 1st, 2nd, 3rd,… document until you have 100 words. (Documents ordered by the relevance ranking given with the topic.)
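The two multi-document baselines are simple enough to state in code. Below is a minimal illustrative sketch, not NIST's actual implementation, assuming each document is available as a dict with a 'date' field and a pre-split list of 'sentences'; because whole sentences are kept, the output can run over the 100-word target, matching the note above about no truncation.

```python
def baseline2(docs, target=100):
    """Baseline 2 (sketch): the first ~100 words of the most recent document,
    keeping whole sentences, so the result may exceed the target."""
    latest = max(docs, key=lambda d: d["date"])
    words, out = 0, []
    for sent in latest["sentences"]:
        if words >= target:
            break
        out.append(sent)
        words += len(sent.split())
    return " ".join(out)


def baseline3(docs, target=100):
    """Baseline 3 (sketch): the first sentence of the 1st, 2nd, 3rd, ... document
    in chronological order until ~100 words are accumulated."""
    words, out = 0, []
    for doc in sorted(docs, key=lambda d: d["date"]):
        if words >= target:
            break
        if doc["sentences"]:
            out.append(doc["sentences"][0])
            words += len(doc["sentences"][0].split())
    return " ".join(out)
```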

  8. Submitted summaries by system and task
SYSID                        Code   T1   T2  T3  T4  Group
AMDS_HW.v1                      6    -   30   -   -  Heriot-Watt University
uam.duc2003.v6                  7  624    -   -   -  University of Madrid
gistkey.duc03                   8  624    -   -   -  Federal U. of Sao Carlos
bbn.umd.hedge                   9  624    -   -   -  BBN / U. of Maryland
CL.Research.duc03              10  622   30  30  30  CL Research
cslab.duc03                    11    -   30  30   -  NTT
fudan.duc2003                  12    -   30   -   -  Fudan University
isiwebcl.duc2003.vcombined     13  624   30  30  30  ISI/USC
aquaintandmultigenanddems      14    -   30   -  30  Columbia University
ku.duc2003                     15  624   30  30   -  Korea University
ccsnsa.duc03.v3                16    -   30  30  29  NSA+
UofLeth-DUC2003                17  624   30  30  30  University of Lethbridge
kul.2003                       18  624   30  30   -  University of Leuven
SumUMFAR                       19    -   30   -  30  University of Montreal
crl_nyu.duc03                  20    -   30  30  30  New York University
uottawa                        21  624   30  30   -  University of Ottawa
lcc.duc03                      22  624   30  30  30  LCC
UofM-MEAD                      23    -   30  30  30  University of Michigan
UDQ                            24  564    -   -   -  University of Girona
CLaC.DUCTape.Summarizer        25  624    -   -   -  Concordia University
saarland.2003                  26  624   30   -   -  Univ. of the Saarland

  9. Evaluation basics • Content coverage and linguistic quality: • Intrinsic evaluation by humans using a special rewritten version of SEE (thanks to Lei Ding and Chin-Yew Lin at ISI) • Compare: • a model summary - authored by a human • a peer summary - system-created, baseline, or additional manual • Produce judgments of: • Peer quality (12 questions) • Coverage of each model unit by the peer (recall) • Relevance of peer-only material • Usefulness (task 1) and Responsiveness (task 4): • Simulated extrinsic evaluations • All peer summaries for a given doc(set) compared together • Assignment of each summary to one of 5 bins

  10. Models • Source: • Authored by a human • For 2003, the assessor is always the model’s author • Formatting: • Divided into model units (MUs) • (MUs == EDUs - thanks to Radu Soricut at ISI) • Lightly edited by authors to integrate uninterpretable fragments • George Bush’s selection of Dan Quayle • as his running mate surprised many • many political observers thought him a lightweight with baggage • to carry • Flowed together with HTML tags for SEE

  11. Peers • Formatting: • Divided into peer units (PUs) – • simple automatically determined sentences • tuned slightly to documents and submissions • Abbreviations list • List of proper nouns • Flowed together with HTML tags for SEE • 4 Sources: • Author’s headline: 1 • Automatically generated by baseline algorithms: 2 – 5 • Automatically generated by research systems: 6 – 26 • Authored by a human other than the assessor: A – J
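The slide describes peer units as simple, automatically determined sentences, with the splitter tuned using an abbreviations list and a list of proper nouns. A rough sketch of that kind of abbreviation-aware splitting follows; the regular expression and the abbreviation list are hypothetical, not the ones NIST used.

```python
import re

# Hypothetical abbreviation list; NIST's was tuned to the documents and submissions.
ABBREVS = {"Mr.", "Mrs.", "Dr.", "Gen.", "U.S.", "Jan.", "Feb.", "St."}


def split_into_peer_units(text):
    """Split text into sentence-like peer units: break after . ! or ?
    followed by whitespace and a capital letter, unless the token just
    before the break is a known abbreviation."""
    units, start = [], 0
    for match in re.finditer(r'[.!?]\s+(?=[A-Z"(])', text):
        candidate = text[start:match.end()].strip()
        if candidate.split()[-1] in ABBREVS:
            continue  # don't break after an abbreviation
        units.append(candidate)
        start = match.end()
    tail = text[start:].strip()
    if tail:
        units.append(tail)
    return units
```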

  12. SEE: overall peer quality

  13. Overall peer quality: 12 questions developed with participants. Answer categories: 0, 1-5, 6-10, >10 • About how many gross capitalization errors are there? • About how many sentences have incorrect word order? • About how many times does the subject fail to agree in number with the verb? • About how many of the sentences are missing important components (e.g. the subject, main verb, direct object, modifier) – causing the sentence to be ungrammatical, unclear, or misleading? • About how many times are unrelated fragments joined into one sentence?

  14. Overall peer quality • About how many times are articles (a, an, the) missing or used incorrectly? • About how many pronouns are there whose antecedents are incorrect, unclear, missing, or come only later? • For about how many nouns is it impossible to determine clearly who or what they refer to? • About how many times should a noun or noun phrase have been replaced with a pronoun? • About how many dangling conjunctions are there ("and", "however"...)? • About how many instances of unnecessarily repeated information are there? • About how many sentences strike you as being in the wrong place because they indicate a strange time sequence, suggest a wrong cause-effect relationship, or just don't fit in topically with neighboring sentences?

  15. Overall peer quality: Systems > Baselines >= Manual [Chart: mean number of quality questions indicating one or more errors, for systems, baselines, and manual summaries]

  16. Overall peer quality: uneven distribution of non-zero scores by question [Chart: counts of non-zero answers (1-5, 6-10, >10) per question (1-12) for Tasks 2-4, with Question 1 (capitalization error), Question 8 (noun referent unknown), and Question 12 (sentence out of place) labeled]

  17. Overall peer quality, Q1: Capitalization

  18. Overall peer quality, Q1: Capitalization
PARIS, February 20 (Xinhua) -- Declaring that "Currency is politics," French Prime Minister Alain Juppe today reiterated France's determination to realize the single European currency.
LONDON, March 28 (Xinhua) -- British officials will fight suggestions that UK be forced to enter a new European exchange rate mechanism (ERM) after the proposed European single currency comes into force, it was reported here today.
LONDON, April 4 (Xinhua) -- British Board of Trade president Ian Lang Wednesday warned that a single European currency could prove harmful to British business if adopted without full and careful consideration of possible consequences.

  19. Overall peer quality, Q8: Noun referent unknown

  20. Overall peer quality, Q8: Noun referent unknown
The president indicate that he is willing to strip some of the anti-environmental he wrote that impact his state riders. That $18 billion on the International Monetary Fund spending bes a waste of money convince conservatives. Dick Armey R-Texas did not predict that the GOP presence in Congress would be even stronger next year when the deal might be reached. Republicans attach the president to deem to be anti-environment provisions. You know We 're that they are about a domestic thinking concerned. Everybody understand the IMF can have American tax dollars. The White House ever have that until mid-September.

  21. Overall peer quality, Q12: Misplaced sentences

  22. Overall peer quality, Q12: Misplaced sentence(s)
All of these satellites came through Tuesday's meteor shower unscathed. Showers of Leonid meteors may produce hundreds or thousands of blazing meteors each hour. Some satellites in low-earth orbits can actually hide from meteoroid storms, Ozkul said. The scientists who track Temple-Tuttle do not even call it a shower, they call it a meteor storm. Satellite experts said that some damage might take days to detect, but that satellites generally seemed to have escaped disabling harm. This storm of meteors, called Leonid meteors because they come from the direction of constellation Leo, will be the first to hit the Earth since 1966 when the world's space programs were in their infancy, and its effects on satellite systems are uncertain.

  23. SEE: per-unit content

  24. Per-unit content: evaluation details • “First, find all the peer units which tell you at least some of what the current model unit tells you, i.e., peer units which express at least some of the same facts as the current model unit. When you find such a PU, click on it to mark it.” • Requirement for common facts relaxed for very short summaries • Common references count • “When you have marked all such PUs for the current MU, then think about the whole set of marked PUs and answer the question:” • “The marked PUs, taken together, express about [ 0% 20% 40% 60% 80% 100% ] of the meaning expressed by the current model unit”

  25. Per-unit content: % MU-to-peer comparisons with no coverage • DUC 2002: • All - 62% • Manual – 42% • DUC 2001 • All - 63% • Appear to be due to real differences in content • Do the peers agree on which MUs are not covered?

  26. Per-unit content: counts of MUs by number of PUs mapped to them [Histograms, one per task (T1-T4)]

  27. Per-unit content measures: recall • What fraction of the model content is also expressed by the peer? • Mean coverage: • average of the per-MU completeness judgments [0, 20, 40, 60, 80, 100]% for a peer summary • Mean length-adjusted coverage (2002): • average of the per-MU length-adjusted coverage judgments for a peer • length-adjusted coverage = 2/3 * coverage + 1/3 * brevity, where brevity = • 0 if actual summary length >= target length; else • (target size – actual size) / target size • Sets two goals: complete coverage and the smallest possible summary • Perfect score only possible when BOTH goals are reached • Truncate if target size exceeded
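A small sketch of the length-adjusted coverage formula above (the function name is illustrative; lengths are in words):

```python
def length_adjusted_coverage(coverage, actual_len, target_len):
    """LAC = 2/3 * coverage + 1/3 * brevity, where brevity rewards
    summaries shorter than the target and is 0 at or above it."""
    if actual_len >= target_len:
        brevity = 0.0
    else:
        brevity = (target_len - actual_len) / target_len
    return (2.0 / 3.0) * coverage + (1.0 / 3.0) * brevity
```

For example, a summary with mean coverage 0.4 that is 80 words long against a 100-word target scores 2/3 * 0.4 + 1/3 * 0.2 ≈ 0.33.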

  28. Summary lengths (in words) by peer [Distributions of summary length per peer, one panel per task (T1-T4)]

  29. Per-unit content measures: recall • Task 1: Coverage • coverage • coverage with penalty iff over target length = coverage * target size / actual size • Post hoc substitute for lack of truncation • Tasks 2-4: Length-adjusted coverage (LAC) • improved: if coverage = 0 then LAC = 0 • improved, with penalty iff over target length = LAC * target size / actual size • proportional = coverage * target size / actual size
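The penalty and "proportional" variants amount to rescaling a score by target/actual length; a minimal sketch under that reading (illustrative function names, lengths in words):

```python
def with_length_penalty(score, actual_len, target_len):
    """Apply the over-length penalty: scale the score by target/actual
    only when the summary exceeds the target length."""
    if actual_len > target_len:
        return score * target_len / actual_len
    return score


def proportional_coverage(coverage, actual_len, target_len):
    """'Proportional' measure: coverage rescaled by target/actual length."""
    return coverage * target_len / actual_len


def improved_lac(coverage, lac):
    """'Improved' LAC: a summary with zero coverage gets LAC = 0,
    removing any brevity bonus for empty or content-free output."""
    return 0.0 if coverage == 0 else lac
```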

  30. Task 1: Very short summary of a single document • System task: • Use the 30 TDT clusters and the 30 TREC clusters • 624 documents • ~10 documents/cluster • Given: • Each document • Create a very short summary (~10 words, no specific format other than linear) of it. • Evaluation: • SEE • Coverage • Extra material • Usefulness

  31. Task 1: Mean coverage with penalty by peer [Chart comparing the manual mean, the author's headline mean, and the system mean]

  32. Task 1: Mean coverage with and without penalty by peer [Two panels, with penalty and without; M = manual, A = author's headline, S = systems]

  33. Task 1: ANOVA (mean coverage with penalty). The GLM Procedure. Number of observations: 9922
R-Square 0.297547   Coeff Var 67.80859   Root MSE 0.208265   Mean 0.307137
Source   DF   Type I SS     Mean Square   F Value   Pr > F
docset   59   42.1070990    0.7136796     16.45     <.0001
peer     22   138.6796453   6.3036202     145.33    <.0001
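The slide shows SAS GLM output (sequential Type I sums of squares for docset and peer effects). For readers without SAS, a roughly equivalent analysis can be sketched with statsmodels; the file name and column names below are hypothetical, assuming one row per judged summary.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one row per judged summary with columns
# 'docset', 'peer' (categorical) and 'coverage' (mean coverage with penalty).
df = pd.read_csv("task1_coverage.csv")

# Two-way fixed-effects model, analogous to the GLM on the slide.
model = smf.ols("coverage ~ C(docset) + C(peer)", data=df).fit()
print(anova_lm(model, typ=1))  # sequential (Type I) sums of squares
```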

  34. Task 1: Multiple comparisons (SAS REGWQ, at the 0.05 level). Means with the same letter are not significantly different.
Coverage:
Group(s)   Mean      N     Peer
A          0.47981   624   1
B          0.40160   624   17
B C        0.37788   624   26
C          0.35801   624   18
D          0.31763   624   21
D          0.30609   624   22
D          0.30000   624   7
D          0.29199   624   25
D E        0.27468   624   9
E          0.24744   624   13
E          0.23511   564   24
F          0.16603   624   15
F          0.15338   622   10
Coverage with penalty:
Group(s)   Mean      N     Peer
A          0.46712   624   1
B          0.37686   624   26
C          0.32009   624   17
C          0.30272   624   21
D          0.26770   624   9
D E        0.25560   624   18
D E F      0.24923   624   22
D E F      0.24744   624   13
E F        0.22206   624   7
F          0.21866   624   25
F          0.21750   564   24
G          0.14949   622   10
G          0.13825   624   15
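REGWQ is a SAS multiple-comparison procedure that statsmodels does not implement; a Tukey HSD test over the same data is a common substitute for producing pairwise groupings, though it is not the procedure NIST used. A sketch, reusing the hypothetical DataFrame from the ANOVA example above:

```python
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Tukey HSD over the same (hypothetical) data -- a stand-in for SAS's REGWQ
# procedure, which statsmodels does not provide.
tukey = pairwise_tukeyhsd(endog=df["coverage"], groups=df["peer"], alpha=0.05)
print(tukey.summary())
```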

  35. Task 1: Usefulness • Simulated extrinsic evaluation • Assessor sees • each document • all summaries of that document • Assessor asked to: • “Assume the document is one you should read.” • “Grade each summary according to how useful you think it would be in getting you to choose the document: 0 (worst, of no use), 1, 2, 3, or 4 (best)” • Double assessment

  36. Task 1: Usefulness – Examples [Document NYT20000415.0068 text]
4 U D107.P.10.C.H.H.A.NYT20000415.0068 :: False convictions turn some conservatives against death penalty.
1 U D107.P.10.C.H.H.7.NYT20000415.0068 :: [death] their views seem incompatible; a number have raised; The columnist George Will wrote that skepticism.
4 U D107.P.10.C.H.H.1.NYT20000415.0068 :: LOOK WHO'S QUESTIONING THE DEATH PENALTY
3 U D107.P.10.C.H.H.J.NYT20000415.0068 :: Conservatives, death penalty, morality, DNA, justice, Will, Pat Robertson, Republican
0 U D107.P.10.C.H.H.9.NYT20000415.0068 :: ranks are admittedly small
4 U D107.P.10.C.H.H.B.NYT20000415.0068 :: Public softens on capital punishment; even conservatives questioning fairness, innocence
1 U D107.P.10.C.H.H.22.NYT20000415.0068 :: Their views seem incompatible with their political philosophy
1 U D107.P.10.C.H.H.15.NYT20000415.0068 :: That people have an incentive to be that the innocent are never to death by state action unborn or in jail whether they are put sure.

  37. Task 1: Usefulness by peer (~95% confidence intervals around the mean)

  38. Task 1: Scaled usefulness & coverage by peer [Scaled usefulness plotted against mean coverage and against mean coverage with penalty]

  39. Task 2: Short summary of a document set focused by a TDT event topic • System task: • Use the 30 TDT clusters • 298 documents • ~ 10 documents/cluster • ~ 352 sentences/cluster • Given: • each document cluster • the associated TDT topic • Create a short summary (~100 words) of the cluster. • Evaluation: • SEE: • 12 linguistic quality items • Content coverage • Extra material

  40. Task 2: Mean length-adjusted coverage with penalty by peer [Chart; M = manual, S = systems, B = baselines]

  41. Task 2: Mean length-adjusted coverage with and without penalty by peer [Two panels, with penalty and without; M = manual, S = systems, B = baselines]

  42. Tasks 2 - 4: ANOVAs • Try ANOVA to see if baselines, manual, systems are significantly different from each other as groups. • ANOVA assumptions/checks: • Data approx. normally distributed with approx. equal variances • Residuals looked as if they could have come from the same normal distribution • Results: • Task 2: all groups significantly different • B != S; S != M; M != B • Tasks 3, 4: can't distinguish systems from baselines (* = quadruple judgments)

  43. Task 2: Multiple comparisons (SAS REGWQ). Means with the same letter are not significantly different.
Mean LAC with penalty:
Group(s)     Mean      N    Peer
A            0.32790   30   22
A B          0.28391   30   13
A B          0.27685   30   23
A B          0.27465   30   6
A B          0.27339   30   16
A B          0.27135   30   14
A B C        0.25117   30   20
A B C D      0.23752   30   11
A B C D      0.23691   30   18
A B C D      0.23628   30   10
B C D E      0.21547   30   12
B C D E      0.21422   30   26
B C D E      0.18898   30   21
C D E        0.17561   30   3
D E F        0.15485   30   19
D E F        0.14820   30   17
E F          0.13968   30   2
F            0.08211   30   15
Proportional:
Group(s)     Mean      N    Peer
A            0.18900   30   13
A B          0.18243   30   6
A B          0.17923   30   16
A B          0.17787   30   22
A B          0.17557   30   23
A B          0.17467   30   14
A B C        0.16550   30   20
A B C D      0.15193   30   18
A B C D      0.14903   30   11
A B C D      0.14520   30   10
A B C D E    0.14357   30   12
A B C D E    0.14293   30   26
B C D E      0.12583   30   21
C D E        0.11677   30   3
D E F        0.09960   30   19
D E F        0.09837   30   17
E F          0.09057   30   2
F            0.05523   30   15

  44. Task 3: Short summary of a document set focused by a viewpoint statement • System task: • Use the 30 TREC clusters • 326 documents • ~ 11 documents/cluster • ~335 sentences/cluster • Given: • each document cluster • a viewpoint description • create a short summary (~100 words) of the cluster from the point of view specified. • Evaluation: • SEE: • 12 linguistic quality items • Content coverage • Extra material

  45. Task 3: Mean length-adjusted coverage with penalty by peer [Chart; M = manual, S = systems, B = baselines]

  46. Task 3: Multiple comparisons (SAS REGWQ). Means with the same letter are not significantly different.
Mean LAC with penalty (full set):
Group(s)   Mean      N    Peer
A          0.12830   30   10
A          0.12820   30   22
A B        0.12330   30   20
A B        0.12250   30   18
A B        0.12063   30   16
A B        0.11517   30   11
A B        0.11223   30   23
A B        0.11063   30   17
A B        0.10137   30   3
A B        0.09850   30   21
A B        0.08477   30   13
A B        0.07900   30   2
B          0.07127   30   15
Mean LAC with penalty (subset):
Group(s)   Mean      N   Peer
A          0.13457   7   23
A          0.13400   7   10
A          0.11686   7   22
A          0.10714   7   3
A          0.10543   7   18
A          0.09757   7   16
A          0.09571   7   11
A          0.09157   7   21
A          0.08986   7   20
A          0.08814   7   15
A          0.07700   7   13
A          0.07543   7   17
A          0.04986   7   2

  47. Task 4: Short summary of document set focused by a question • System task: • Use the 30 TREC Novelty track clusters • 734 documents • ~ 24 documents/cluster • ~ 66 relevant sentences/cluster • Given: • A document cluster • A question/topic • Set of sentences in each document that are relevant to the question • Create a short summary (~100 words) of the cluster that answers the question. Assessors were told to summarize the relevant sentences • Evaluation: • SEE: • 12 linguistic quality items • Content coverage • Extra material • Responsiveness

  48. Task 4*: Mean length-adjusted coverage with penalty by peer [Chart; M = manual, S = systems, B = baselines]

  49. Task 4*: Mean length-adjusted coverage with and without penalty by peer [Two panels, with penalty and without; M = manual, S = systems, B = baselines]

  50. Task 4*: ANOVA • Try ANOVA to see if baselines, manual, systems are significantly different from each other as groups • Use quadruple judgment data to estimate effect of interactions • Model: coverage = grandmean + docset + peer + assessor + assessorXpeer + docsetXpeer + docsetXassessor + everything else
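One way to express the model on this slide is with statsmodels' formula interface, treating docset, peer, and assessor as categorical factors and letting the residual absorb "everything else". This is a sketch, not the analysis NIST ran; the file and column names are hypothetical, assuming one row per quadruple judgment.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical quadruple-judgment data: one row per (docset, peer, assessor)
# judgment with its length-adjusted coverage score.
df4 = pd.read_csv("task4_quadruple_judgments.csv")

model = smf.ols(
    "coverage ~ C(docset) + C(peer) + C(assessor)"
    " + C(assessor):C(peer) + C(docset):C(peer) + C(docset):C(assessor)",
    data=df4,
).fit()
print(anova_lm(model, typ=1))  # 'everything else' goes to the residual
```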
