
Exposing and Quantifying Narrative and Thematic Structures in Well-formed and Ill-formed Text

Dr Dale Chant, Red Centre Software Pty Ltd

ASC Conference: Making Sense of New Research Technologies

Critical Reflections on Methodology and Technology:

Gamification, Text Analysis, and Data Visualisation

Friday 6th and Saturday 7th September 2013, University of Winchester

The Problem
  • Coding open-ended verbatims takes a long time
  • Inconsistent coding judgements can wreak havoc on small weekly samples
  • Some bodies of free text, such as Twitter feeds, are beyond human capacity to digest due to sheer volume
  • Machine coding by string matching assumes well-formed text – variant morphologies difficult to accommodate
Naïve Auto-Coding
  • Read all source words (or complete strings) into an array
  • Sort alphabetically
  • Assign codes from 1 to N, where N is the number of unique words (or unique strings)
  • Write the assigned codes in the original word (or string) order
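The four steps above can be sketched in a few lines of Python (a minimal illustration only, not Red Centre's implementation):

```python
def naive_auto_code(words):
    """Assign codes 1..N to the alphabetically sorted unique words,
    then emit the assigned codes in the original word order."""
    codebook = {w: i for i, w in enumerate(sorted(set(words)), start=1)}
    return codebook, [codebook[w] for w in words]

codebook, codes = naive_auto_code(["the", "cat", "sat", "on", "the", "mat"])
# codebook: {'cat': 1, 'mat': 2, 'on': 3, 'sat': 4, 'the': 5}
# codes:    [5, 1, 4, 3, 5, 2]
```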
Naïve Auto-Coding
  • Well-formed published text is code-complete

The first line of Wuthering Heights

The complete code frame has 9,201 items

Netting to a Theme
  • With the code frame defined, themes can be netted from individual words

abandonment = abandon/abandoned/abandonment/reject/rejected/rejecting

Coded Decoded

Theme(1) = Text(3/5,6530/6532)
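A netting step can be sketched as a mapping from theme names to code sets. The mini code frame below is hypothetical, with codes chosen to echo the Theme(1) example above:

```python
def net_themes(codebook, themes):
    """Map each theme name to the set of codes for its synonym/variant
    words. codebook: word -> code, as produced by the auto-coding step."""
    return {theme: {codebook[w] for w in words if w in codebook}
            for theme, words in themes.items()}

# Hypothetical fragment of a code frame (real frames have thousands of entries)
codebook = {"abandon": 3, "abandoned": 4, "abandonment": 5,
            "reject": 6530, "rejected": 6531, "rejecting": 6532}

nets = net_themes(codebook, {
    "abandonment": ["abandon", "abandoned", "abandonment",
                    "reject", "rejected", "rejecting"],
})
# nets["abandonment"] == {3, 4, 5, 6530, 6531, 6532}
```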

Code Incomplete
  • Open-ended tracker Brand Awareness questions, time-dependent blog or social media exchanges
  • Can never be code-complete, because forthcoming data may throw up unanticipated variations


Damerau-Levenshtein
  • One approach is Approximate String Matching
  • Match a source string to a target string by combinations of i) Insert ii) Delete iii) Replace iv) Transpose
  • The edit distance is the number of transforms needed to get from the source to the target
The Algorithm in Action
  • There is an interactive implementation of Damerau-Levenshtein at


Scaling the Algorithm (1)
  • To be useful, the allowable distance for a positive match needs to scale against the length of the target strings
  • ‘ox’ to ‘fox’ has distance 1 (insert at head). This would be a false positive
  • ‘megalomania’ to ‘megalomaniacs’ has distance 2 (insert twice at tail). This is a good match
Scaling the Algorithm (2)
  • Short strings need a distance of zero
  • Intermediate strings need 1 or 2
  • Longer strings can bear 2 or 3 or more
  • The thresholds for short/intermediate/long and allowed distances for a positive match are here termed the fuzz parameters
  • Fuzz parameters are determined empirically, and will vary with the body of text being analysed.
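One way to encode the fuzz parameters is as a length-scaled threshold. The cut-offs below are the ones the deck settles on later (distance 0 for 4 characters or fewer, 1 for up to 9, 2 beyond that); other bodies of text would need different values:

```python
def allowed_distance(target, fuzz=((4, 0), (9, 1)), default=2):
    """Return the maximum edit distance for a positive match against
    `target`, scaled by the target's length.
    fuzz: (max_length, allowed_distance) pairs, checked in order;
    longer targets fall through to the default."""
    for max_len, dist in fuzz:
        if len(target) <= max_len:
            return dist
    return default

allowed_distance("ox")            # 0 (short strings: exact match only)
allowed_distance("asylum")        # 1 (intermediate)
allowed_distance("megalomaniac")  # 2 (long)
```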
What is Gained

The target string megalomaniac, at an edit distance of 1, will match on:

  • 12 × 26 in-situ typos (negalomaniac)
  • 12 missing characters (megaomaniac)
  • 12 × 26 extraneous characters (megaloomaniac)
  • 11 transpositions (meglaomaniac)
  • 2 × 26 extra pre/post characters (mmegalomaniac)

= 699 possible variations
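The tally can be checked directly. The counting scheme is the slide's own; it counts some equivalent variants more than once, so it is an upper bound on distinct strings rather than an exact count:

```python
# megalomaniac has 12 characters; distance-1 neighbours per the slide's scheme
in_situ  = 12 * 26   # replace any character
missing  = 12        # delete any character
extra    = 12 * 26   # insert a letter inside the word
swapped  = 11        # transpose adjacent pairs
pre_post = 2 * 26    # insert a letter before or after the word

total = in_situ + missing + extra + swapped + pre_post
# total == 699
```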

The Procedure
  • Code the source text, one code per unique word
  • Run a sorted frequency count to expose recurrent themes and concepts
  • Review actual instances of these words in situ to determine appropriate fuzz parameters and the thematic and conceptual contexts
  • Devise a compact target code frame which maps the theme and concept words of interest to synonym and variant lists
  • Process the source text against the targets, to create a categorical variable which can be tabulated in the normal manner against any other variable
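Steps 1 and 2 of the procedure amount to a sorted frequency count; a minimal sketch using Python's Counter:

```python
import re
from collections import Counter

def expose(text, top=5):
    """Code the text one code per unique word, then return the most
    frequent words (the sorted frequency count of step 2)."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)

expose("love loves conflict love death conflict love")
# [('love', 3), ('conflict', 2), ...]
```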
Exposure and Quantification: Romeo and Juliet
  • Since this text is bounded, code-complete and well-formed, the fuzz parameters can all be set to zero
  • The Exposure step reveals dominance for i) love and related words, ii) misery and despair, iii) conflict and death

Love dominates, then diminishes

Romeo Romeo, wherefore art thou Romeo?

Interlaced with Conflict and Death

Near mathematical symmetry:

Ill-formed Text
  • Tweets on Australian Federal Politics
  • From 1 June 2013 to 31 July 2013
  • Search term: #auspol OR #auspoll OR #ausvotes OR #ozcot
  • 927,190 cases
  • Average between 10,000 and 20,000 per day
  • Huge spike on 26 June


Data Sources
  • Two commercial data source providers were used: Gnip and ScraperWiki
  • The Gnip data was collected in a single 28-hour run conducted on 15 Aug 2013
  • ScraperWiki provides user-initiated searches for up to the prior seven days
  • Because ScraperWiki is near real time, accounts banned or suspended by 15 August, and hence missing from the Gnip data, remain present in its results
  • The ScraperWiki data is used below only to demonstrate this point.



Australian Federal Politics since 2007

Timeline figure, ’07 to ’13:

  • Rudd (Labor) defeats Howard (Conservative) at the 2007 general election
  • Gillard (Labor) challenges and defeats Rudd, calls an election; hung parliament
  • Rudd challenges and defeats Gillard, calls the election for 7 Sept; faces Abbott (Conservative)

The Grand Narrative
  • With the Pretender to the Throne (Gillard) summarily dispatched
  • The True and Rightful King (Rudd), triumphantly returned from (backbench) exile
  • Now faces the Great Adversary (Abbott) in a battle to the (political) death for control of the realm

Warning: Aussie Vernacular Alert

Assange Senate Bid

Distorting the Message (1)

Attack of the TweetBots

Distorting the Message (2)

The Scheduled Automatons

Hashtags

  • Much more than just a metatag
  • They function as message tokens too:
    • Commentary on current affairs (#1000BoatDeaths, #20000JobCuts)
    • Calls to action (#2013electiondateplease, #AbolishParliament)
    • Political attack (#AbbotLies)
    • Take a position (#AgeOfEntitlement)
    • Make a joke or pun (#calmdownbirdie, #fraudband)
Hashtag Spawn

Quantification should capture as many variants as possible.

Sort on 19 July, Zoom

The PNG Solution is more punitive than anything the Conservatives have tried

But many instances missed

Three dominant tags are clear, but the variants will be lost under a search on just asylum OR asylumseeker/s

Ditto Refugee/Refugees, etc.

  • A smoothed percentage chart of all instances exposes the narratives, but to quantify them accurately we cannot forgo counting the variants.
  • To get a more precise read, we apply Damerau-Levenshtein.
  • Recalling the four transformation rules (insert, delete, replace, transpose), the following matches (among many others) will be made to the dominant forms at run time:
    • battelrort -> battlerort (transpose once)
    • calmdownbirdie -> calmdownbridie (transpose once)
    • asylumseeke -> asylumseekers (insert twice)
    • asylumseekeers -> asylumseekers (delete once)
    • asylymseekers -> asylumseekers (replace once)
Prepare the synonym/variants lists for the dominant tags
  • The procedure is:
  • Code the hashtags, one code per unique tag
  • Generate a sorted frequency count table
  • Choose a cut-off point - I have used 30
  • Review all items > 30, define and initialise a coded synonym/variants list with the dominant tags
  • Sort the table alphabetically by label
  • Review label blocks for any variants which are too coarse for Damerau-Levenshtein, and add to the relevant synonym/variants target list
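Steps 2 and 3 above, the sorted frequency count and the cut-off at 30, can be sketched as follows (the hashtag counts are invented for illustration):

```python
from collections import Counter

def dominant_tags(hashtags, cutoff=30):
    """Sorted frequency count; keep tags mentioned more than `cutoff`
    times as seeds for the synonym/variants target lists."""
    return [tag for tag, n in Counter(hashtags).most_common() if n > cutoff]

tags = ["#auspol"] * 120 + ["#asylumseekers"] * 45 + ["#fraudband"] * 12
dominant_tags(tags)   # ['#auspol', '#asylumseekers']
```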
Confirm it Works
  • Set fuzz parameters as distance=0 for strings 4 characters or less, distance=1 for 9 characters or less, and distance=2 for 10 characters or more
  • Run the source hashtags against these targets to create a new variable comprising eight categorical codes
  • To confirm, run a table of the eight coded categories against the original raw hashtag text

The source strings battelrort and calmdownbirdie are both correctly captured and coded.

Share of Voice

All synonym and variant matches for Rudd, Abbott, Gillard, as percentages of the sum of their total mentions per day:

Compared to Topsy Sentiment Score


Not much agreement here.

Who is right?

Performance

  • Machine: standard business Dell laptop, dual core, 4 GB RAM, nothing fancy, no acceleration
  • The bottleneck is the Damerau-Levenshtein step on the tweet text, which for the above 46 categories over 113 MB of plain text takes about 15 hours
  • Performance is linear to the number of individual target synonyms/variants
  • Damerau-Levenshtein on the hashtags, a much smaller set of targets, completes in about 20 minutes
  • The major time commitment from a human is in devising the target synonym and variants lists, here several hours
  • For more routine applications of the technique, such as open-ended brand lists, preparing the target lists is trivial


The Finale


And Throughout the Land of Oz, the Loud Lament was Raised:


DatDat: Rebirthing Dada for the Digital Milieu

Tentative: 5.30 Tuesday Goldsmiths. Email dale@redcentresoftware.com to confirm.