SNoW: Sparse Network of Winnows



Presentation Transcript


  1. SNoW: Sparse Network of Winnows. Presented by Nick Rizzolo. Cognitive Computations Software Tutorial

  2. Introduction • Multi-class classification • Infinite attribute domain • User-configurable linear threshold unit networks • Variety of update rules, algorithmic extensions

  3. Outline • The Basic System • Training • Algorithmic Extensions • Testing • Tuning

  4. The Basic System • [Figure: a two-layer network linking target (concept) nodes to feature nodes by weighted edges, instead of weight vectors] • SNoW only represents the targets and weighted edges • Prediction is “one vs. all”

  5. Update Rules • Winnow – mistake driven • Promotion: if the example is positive but the activation Σ wᵢ (summed over active features) ≤ θ, then wᵢ ← α·wᵢ for every active feature i • Demotion: if the example is negative but Σ wᵢ > θ, then wᵢ ← β·wᵢ • Perceptron – mistake driven • Promotion: if positive but Σ wᵢ ≤ θ, then wᵢ ← wᵢ + α • Demotion: if negative but Σ wᵢ > θ, then wᵢ ← wᵢ − α • Naïve Bayes – statistical, updated from feature/target co-occurrence counts rather than mistakes
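In Python, the two mistake-driven rules can be sketched as follows (a minimal sketch: binary features, weights kept in a dict, and illustrative default rates rather than SNoW's shipped defaults):

```python
def winnow_update(weights, active, positive, theta=3.5, alpha=2.0, beta=0.5,
                  init=1.0):
    """One Winnow step: multiplicative promotion/demotion on a mistake."""
    act = sum(weights.setdefault(f, init) for f in active)
    if positive and act <= theta:        # missed a positive: w_i <- alpha * w_i
        for f in active:
            weights[f] *= alpha
    elif not positive and act > theta:   # fired on a negative: w_i <- beta * w_i
        for f in active:
            weights[f] *= beta
    return weights

def perceptron_update(weights, active, positive, theta=1.0, rate=0.1):
    """One Perceptron step: additive promotion/demotion on a mistake."""
    act = sum(weights.get(f, 0.0) for f in active)
    if positive and act <= theta:        # w_i <- w_i + rate
        for f in active:
            weights[f] = weights.get(f, 0.0) + rate
    elif not positive and act > theta:   # w_i <- w_i - rate
        for f in active:
            weights[f] = weights.get(f, 0.0) - rate
    return weights
```

Both rules update only the features active in the current example, which is what makes learning practical over an infinite attribute domain.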

  6. Command Line Interface
  snow –train –I <example file> -F <network file>
    -W <α>,<β>,<θ>,<initial weight>:<targets>
    -P <α>,<θ>,<initial weight>:<targets>
  snow –test –I <example file> -F <network file> …
  For example:
  snow –train –I learnMe.snow –F learnMe.net –W 2,0.5,3.5,1:1-3 –z + -s s
  snow –test –I testMe.snow -F learnMe.net –o winners

  7. A Training Example
  Update rule: Winnow, α = 2, β = ½, θ = 3.5
  Training examples (a target label, then the active features):
  1, 1001, 1006:
  2, 1002, 1007, 1008:
  1, 1004, 1007:
  3, 1006, 1004:
  3, 1004, 1005, 1009:
  Test example: 1001, 1005, 1007:
  [Figure: the slide traces the link weights of target nodes 1–3 over features 1001–1009 as they are promoted and demoted across these examples]
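Assuming each example line is a target label followed by its active features, the slide's walk-through can be reconstructed as a one-vs-all Winnow trace:

```python
# Winnow with alpha = 2, beta = 1/2, theta = 3.5, initial link weight 1;
# targets 1-3 are trained one-vs-all on the slide's five examples.
alpha, beta, theta, init = 2.0, 0.5, 3.5, 1.0
examples = [(1, {1001, 1006}), (2, {1002, 1007, 1008}), (1, {1004, 1007}),
            (3, {1006, 1004}), (3, {1004, 1005, 1009})]
nets = {t: {} for t in (1, 2, 3)}          # target -> {feature: link weight}

def activation(target, features):
    # links are allocated lazily, at the initial weight, when first seen
    return sum(nets[target].setdefault(f, init) for f in features)

for label, features in examples:
    for t in nets:                          # each target learns independently
        act, positive = activation(t, features), (t == label)
        if positive and act <= theta:       # missed positive: promote
            for f in features:
                nets[t][f] *= alpha
        elif not positive and act > theta:  # fired on negative: demote
            for f in features:
                nets[t][f] *= beta

winner = max(nets, key=lambda t: activation(t, {1001, 1005, 1007}))
```

On the test example 1001, 1005, 1007: target 1 wins with activation 5 (2 + 1 + 2), illustrating one-vs-all prediction by highest activation.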

  8. Bells & Whistles • Eligibility -e count:1 • Discarding -d abs:0.1 • “Fixed feature” -f - • Feature conjunctions -g - • Prediction threshold -p 0.5 • Smoothing -b 5
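Discarding, for instance, prunes weak links to keep the network sparse; a sketch of what a setting like -d abs:0.1 plausibly does (the exact pruning schedule is an assumption):

```python
def discard(weights, threshold=0.1):
    # drop links whose weight magnitude has fallen below the threshold,
    # keeping the target's link set sparse (sketch of -d abs:0.1)
    return {f: w for f, w in weights.items() if abs(w) >= threshold}
```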

  9. Outline • The Basic System • Training • Algorithmic Extensions • Testing • Tuning

  10. Clouds (Voting) • Multiple target nodes per concept • Learning is still independent • The cloud's activation is a weighted sum of its target nodes' activations • A decreasing function of each node's mistake count supplies those weights
  snow –train –I learnMe.snow –F learnMe.net –W :1-3 –P :1-3
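A sketch of cloud voting, using gamma**mistakes as an illustrative decreasing function of the mistake count (SNoW's exact weighting scheme may differ):

```python
def cloud_activation(activations, mistake_counts, gamma=0.9):
    # each target node votes with weight gamma**mistakes, so nodes that
    # have made fewer mistakes dominate the cloud's combined activation
    votes = [gamma ** m for m in mistake_counts]
    return sum(v * a for v, a in zip(votes, activations)) / sum(votes)
```

Here a node with 0 mistakes outvotes one with 10, so the cloud's activation tracks the historically more reliable learner.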

  11. The Sequential Model • Larger confusion set → lower prediction accuracy • Put prior knowledge in each example: the targets listed after the semicolon restrict prediction to that confusion set, e.g.
  1, 1001, 1005, 1007; 1, 3:
  1001, 1005, 1007; 1, 3:

  12. The Thick Separator -S <p>,<n> • Theory is similar to SVM • Increases “margin” • Improves generalization • Almost always improves performance
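The thick separator widens the region around θ in which updates still fire; a Winnow-flavored sketch, with margins p and n as in -S <p>,<n> (rates and defaults illustrative):

```python
def thick_winnow_update(weights, active, positive, theta=3.5, p=1.0, n=1.0,
                        alpha=2.0, beta=0.5, init=1.0):
    # positives must clear theta + p, and negatives must fall below
    # theta - n, before updates stop: this enforces a "thick" margin
    act = sum(weights.setdefault(f, init) for f in active)
    if positive and act <= theta + p:
        for f in active:
            weights[f] *= alpha
    elif not positive and act >= theta - n:
        for f in active:
            weights[f] *= beta
    return weights
```

Note the second case below: plain Winnow would leave a positive with activation 4 > θ = 3.5 alone, but the thick separator still promotes it because 4 ≤ θ + p.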

  13. Constraint Classification -O <+|-> • Promotion / demotion depends on activation comparisons • More expressive than independent update rules • Sometimes difficult to realize performance improvements
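A sketch of a comparison-based update: instead of testing each target against its own threshold, the correct target is promoted and any target that matches or beats its activation is demoted (α and β are reused from Winnow purely for illustration; SNoW's exact policy may differ):

```python
def constraint_update(nets, active, true_target, alpha=2.0, beta=0.5,
                      init=1.0):
    # compare activations across targets; update only when some wrong
    # target's activation reaches the true target's activation
    acts = {t: sum(w.setdefault(f, init) for f in active)
            for t, w in nets.items()}
    rivals = [t for t in nets
              if t != true_target and acts[t] >= acts[true_target]]
    if rivals:                              # the ordering constraint is violated
        for f in active:
            nets[true_target][f] *= alpha   # raise the correct target
        for t in rivals:
            for f in active:
                nets[t][f] *= beta          # lower the competing targets
    return nets
```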

  14. Other Training Policies • Regression • Function approximation • (Exponentiated) Gradient Descent • Threshold relative updating • Each update moves the hyperplane to the example • Single target training and testing (-G <+|->, -t <+|->)
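Threshold-relative updating can be sketched for binary features, where a single additive step of (θ − activation) / |active features| per link lands the example exactly on the threshold (an illustrative reading of "each update moves the hyperplane to the example"):

```python
def threshold_relative_update(weights, active, positive, theta=3.5):
    # on a mistake, shift every active weight by the same delta so the
    # example's new activation equals theta exactly (binary features)
    act = sum(weights.get(f, 0.0) for f in active)
    mistake = (positive and act <= theta) or (not positive and act > theta)
    if mistake:
        delta = (theta - act) / len(active)
        for f in active:
            weights[f] = weights.get(f, 0.0) + delta
    return weights
```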

  15. Outline • The Basic System • Training • Algorithmic Extensions • Testing • Tuning

  16. Sigmoid Functions • How can activations from different algorithms be compared? • For Winnow / Perceptron, a sigmoid maps the activation Ω to (0,1): σ = 1 / (1 + e^(θ − Ω)) • A softmax can then normalize the scores across targets • Naïve Bayes already yields activations in (0,1)
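A sketch of the mapping, with Ω the raw activation and θ the threshold:

```python
import math

def confidence(activation, theta):
    # map a raw Winnow/Perceptron activation into (0, 1), centered at the
    # threshold, so scores from different algorithms become comparable
    return 1.0 / (1.0 + math.exp(theta - activation))
```

An activation exactly at the threshold maps to 0.5; activations above the threshold approach 1, those below approach 0.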

  17. Output Modes -o <below> • accuracy (the default) • Assumes labeled testing examples • Simply reports SNoW’s performance • winners • softmax • allactivations
  Example 47 Label: 0
  0: 0.70005  4.8475  0.77582*
  1: 0.30915  3.1959  0.14876
  2: 0.18493  2.5167  0.075420

  18. Outline • The Basic System • Training • Algorithmic Extensions • Testing • Tuning

  19. Tuning • Parameter settings make a big difference • Want to pick the right algorithm • Don’t want to over-fit • Trial & error
  The major players:
  -W <α>,<β>,<θ>,<initial weight>:<targets>
  -P <α>,<θ>,<initial weight>:<targets>
  -S <thickness>
  -r <rounds>
  -u <+ | ->
  tune.pl –train <file> -arch W:<targets> …
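Trial & error itself is easy to script; a hypothetical grid search over the Winnow parameters, with `evaluate` standing in for a train-plus-held-out-test run (the callback and parameter grids are assumptions, not part of SNoW):

```python
import itertools

def grid_search(evaluate, alphas, betas, thetas):
    # exhaustive trial-and-error: score every (alpha, beta, theta)
    # combination with a user-supplied evaluation and keep the best
    return max(itertools.product(alphas, betas, thetas),
               key=lambda params: evaluate(*params))
```

Evaluating on held-out data rather than the training set is what guards against the over-fitting the slide warns about.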
