Learning linguistic structure with simple recurrent networks
February 20, 2013

Elman’s Simple Recurrent Network (Elman, 1990)
  • What is the best way to represent time?
    • Slots?
    • Or time itself?
  • What is the best way to represent language?
    • Units and rules?
    • Or connectionist learning?
  • Is grammar learnable?
  • If so, are there any necessary constraints?
The Simple Recurrent Network
  • The network is trained on a stream of elements with sequential structure.
  • At step n, the target for the output is the element at step n+1.
  • The pattern on the hidden units is copied back to the context units.
  • After learning, the network comes to retain information about preceding elements of the string, allowing expectations to be conditioned on an indefinite window of prior context (a minimal sketch follows below).
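The forward pass and weight update can be illustrated with a short NumPy sketch. This is not Elman's original implementation: the class and parameter names, the one-hot input/target coding, the tanh and softmax choices, and the learning rate are assumptions of the sketch, and error is backpropagated only through the current time step, with the context treated as a fixed copy of the previous hidden pattern.

```python
# Minimal SRN sketch (assumptions: one-hot inputs/targets, tanh hidden units,
# softmax output, cross-entropy loss, and backprop through the current step
# only, treating the context as a fixed extra input).
import numpy as np

rng = np.random.default_rng(0)

class SimpleRecurrentNetwork:
    def __init__(self, n_in, n_hidden, n_out, lr=0.1):
        self.W_xh = rng.normal(0.0, 0.1, (n_hidden, n_in))      # input  -> hidden
        self.W_ch = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.W_hy = rng.normal(0.0, 0.1, (n_out, n_hidden))     # hidden -> output
        self.context = np.zeros(n_hidden)                       # context units
        self.lr = lr

    def step(self, x, target):
        # Forward pass: hidden units see the current input plus the copied-back
        # context (the previous hidden pattern).
        h = np.tanh(self.W_xh @ x + self.W_ch @ self.context)
        logits = self.W_hy @ h
        p = np.exp(logits - logits.max())
        p /= p.sum()                                    # softmax prediction of the next element

        # Error-driven weight update for this step only.
        d_logits = p - target                           # softmax + cross-entropy gradient
        d_h = (self.W_hy.T @ d_logits) * (1.0 - h ** 2) # back through tanh
        self.W_hy -= self.lr * np.outer(d_logits, h)
        self.W_xh -= self.lr * np.outer(d_h, x)
        self.W_ch -= self.lr * np.outer(d_h, self.context)

        # Copy the hidden pattern back to the context units for the next step.
        self.context = h
        return p
```

Training on a sequence then amounts to calling step(one_hot(seq[t]), one_hot(seq[t+1])) for t = 0, 1, 2, and so on, with one_hot being whatever encoding maps an element to an input vector.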
Learning about words from streams of letters (200 sentences of 4-9 words)

Similarly, SRNs have been used to model learning to segment words in speech (e.g., Christiansen, Allen, & Seidenberg, 1998).

Learned and imputed hidden-layer representations (average vectors over all contexts)

‘Zog’ representation derived by averaging vectors obtained by inserting the novel item in place of each occurrence of ‘man’.
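One way to compute these averaged (and imputed) representations, assuming the SimpleRecurrentNetwork sketch above and a hypothetical one_hot(token) encoder in which the novel item gets its own otherwise unused input unit, is:

```python
# Sketch: average the hidden-unit pattern over every context in which a word
# occurs; the imputed 'zog' vector comes from substituting the novel item for
# 'man' throughout the corpus. `net`, `corpus`, and `one_hot` are illustrative
# names, not the original simulation code.
import numpy as np

def average_representation(net, corpus, word, one_hot):
    vectors = []
    for sentence in corpus:
        net.context = np.zeros_like(net.context)   # fresh context per sentence
        for token in sentence:
            h = np.tanh(net.W_xh @ one_hot(token) + net.W_ch @ net.context)
            net.context = h
            if token == word:
                vectors.append(h)                   # hidden pattern in this context
    return np.mean(vectors, axis=0)

def imputed_representation(net, corpus, one_hot, novel="zog", replace="man"):
    # Rerun the corpus with the novel item substituted for every 'man', then average.
    substituted = [[novel if t == replace else t for t in s] for s in corpus]
    return average_representation(net, substituted, novel, one_hot)
```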

Analysis of SRNs Using Simpler Sequential Structures (Servan-Schreiber, Cleeremans, & McClelland)

The Grammar
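The finite-state grammar used in these simulations is the Reber grammar, whose strings begin with B and end with E. A minimal string generator is sketched below; the node numbering and the equal-probability choice at each branch point are conventions of this sketch rather than details taken from the paper.

```python
# Sketch of a string generator for the Reber grammar.
import random

# transitions[node] = list of (letter, next_node) arcs
TRANSITIONS = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}

def generate_string(rng=random):
    node, letters = 0, ["B"]                  # every string begins with B
    while node != 5:                          # node 5 is the terminal node
        letter, node = rng.choice(TRANSITIONS[node])
        letters.append(letter)
    letters.append("E")                       # every string ends with E
    return "".join(letters)

# e.g. generate_string() might return 'BTXSE' or 'BPVVE'
```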

The Network


Hidden unit representations with 3 hidden units

True Finite State Machine

Graded State Machine

Training with Restricted Set of Strings

21 of the 43 valid strings of length 3-8
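One way to construct such a restricted training set is to enumerate the grammatical strings up to the length bound and keep a fixed subset, reusing the TRANSITIONS table from the generator sketch above. Whether "length 3-8" counts the initial B and final E is not stated on the slide, so the filter below is just one reading, and the particular 21 strings used in the paper are not reproduced here.

```python
# Sketch: enumerate all grammatical strings up to a length bound, reusing the
# TRANSITIONS table from the generator sketch above.
def enumerate_strings(node=0, prefix="B", max_len=10):
    if node == 5:                        # terminal node: close the string with E
        yield prefix + "E"
        return
    if len(prefix) + 2 > max_len:        # no room for another letter plus the final E
        return
    for letter, nxt in TRANSITIONS[node]:
        yield from enumerate_strings(nxt, prefix + letter, max_len)

# One reading of "length 3-8": 3-8 letters between the B and E markers.
restricted_candidates = [s for s in enumerate_strings(max_len=10)
                         if 3 <= len(s) - 2 <= 8]
```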

Progressive Deepening of the Network’s Sensitivity to Prior Context

Note: Prior Context is only maintained if it is prediction-relevant at intermediate points.

NV Agreement and Verb Successor Prediction
  • Histograms show summed activation for classes of words:
    • W = who
    • S = period
    • V1/V2 and N1/N2/PN indicate singular, plural, or proper forms
    • For V’s (DO = direct object):
      • N = No DO
      • O = Optional DO
      • R = Required DO
Role of the Prediction Relevance of the Head in Carrying Context Across an Embedding
  • If the network at right is trained with symmetrical embedded strings, it does not reliably carry the prior context through the embedding (and thus fails to correctly predict the final letter, especially for longer embeddings).
  • If, however, subtle asymmetries in the transitional probabilities are introduced (as shown), performance in predicting the correct letter after emerging from the embedding becomes perfect (although very long strings were not tested).
  • This happens because the initial context ‘shades’ the representation, as shown on the next slide.

Hidden unit representations in the network trained on the asymmetrical embedded sub-grammars.

  • Representations of the same internal sequence in different sub-grammars are more similar than representations of different sequences in the same sub-grammar.
  • The model is capturing the similarity of nodes across the two sub-grammars.
  • Nonetheless, it is able to shade these representations in order to allow it to predict the correct final token.

Importance of Starting Small?
  • Elman (1993) found that his 1991 network did not learn a corpus of sentences with many embeddings if the training corpus was stationary from the start.
  • However, he found that training went much better if he either:
    • Started with only simple sentences and gradually increased the fraction of sentences with embedded clauses, or
    • Started with a limited memory (erasing the context after 3 time steps), then gradually increased the interval between erasures.
  • This forced the network to ‘start small’ and seemed to help learning (see the sketch below).
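The two regimens can be sketched as simple training schedules; the phase boundaries, embedding proportions, and memory windows below are illustrative rather than Elman's exact values, and net is assumed to be the SimpleRecurrentNetwork from the earlier sketch.

```python
import numpy as np

# Regimen 1 (incremental input): fraction of training sentences that contain
# embedded clauses, growing across training phases. A corpus builder would use
# this proportion when sampling sentences for each phase (values illustrative).
def embedding_proportion(phase):
    schedule = [0.0, 0.25, 0.5, 0.75]
    return schedule[min(phase, len(schedule) - 1)]

# Regimen 2 (incremental memory): train on the full corpus from the start, but
# erase the context units every `window` steps, with `window` growing across
# phases (e.g. 3 at first, then longer intervals, then no erasure at all).
def train_with_limited_memory(net, sequence, one_hot, window):
    steps_since_reset = 0
    for current, nxt in zip(sequence, sequence[1:]):
        if steps_since_reset >= window:
            net.context = np.zeros_like(net.context)   # wipe the network's memory
            steps_since_reset = 0
        net.step(one_hot(current), one_hot(nxt))        # predict the next element
        steps_since_reset += 1
```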
A Failure of Replication
  • Rohde and Plaut revisited ‘starting small’.
  • They considered the effects of adding semantic constraints.
  • They also used different training parameters; with Elman’s parameters, the network appeared to settle into local minima.
Grammar and Semantic Constraints

Complex regimen: 75% of sentences contain embeddings throughout.

Simple regimen: start without embeddings, then increase the proportion of embeddings in steps up to 75%.

Rohde and Plaut generally found an advantage for ‘starting big’:
    • Performance was generally better using the final corpus, in which 75% of sentences contain embeddings from the start (Complex regimen), than when starting with only simple sentences and gradually increasing the percentage of embeddings (Simple regimen).
  • An advantage for starting small only occurred when:
    • The final corpus contained embeddings in 100% of sentences.
    • Semantic and agreement constraints between head noun and embedded verb were both completely eliminated (Corpus A’).


Conditions A-E are ordered by proportion of sentences in which semantic constraints operate between head noun and subordinate clause (from 0 to 1.0).

In A through E above, the subordinate verb always agrees in number with the head noun where appropriate; this is not the case in A’.

Discussion
  • Specific questions about the SRN:
    • Can it be applied successfully to other tasks?
    • Is its way of representing context psychologically realistic?
    • Can something like it be scaled up to address languages with large vocabularies?
  • More general questions about language:
    • Is language acquisition a matter of learning a grammar?
    • Are innate constraints required to learn it?