
Pattern Recognition with N-Tuple Systems



  1. Pattern Recognition with N-Tuple Systems Simon Lucas, Computer Science Dept, Essex University

  2. Overview • Standard Binary n-tuple • Dealing with grey-levels • Continuous n-tuple • Bit-plane decomposition • Dealing with sequences • Scanning N-Tuple • Future Directions

  3. N-Tuple Systems • Bledsoe + Browning (late fifties) • Sample a pattern at m sets of n points per set • Use each sample set as a memory address • Have an n-tuple “bank” for each pattern class • Simple training: note the address occurrences for each class

  4. What to store • Various options • 1-bit: whether the address occurred or not • Frequency-weighted: count the number of occurrences • Probabilistic: use the counts to estimate probabilities • The 1-bit version saturates • Usually better to use the probabilistic version (ML estimate)
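
Slides 3 and 4 together amount to a very small algorithm. Below is a minimal sketch in Python/NumPy under assumed parameters (100 tuples of 8 points over a 256-pixel binary pattern, add-one smoothing); none of these numbers come from the slides. Swapping `freq` for a test of `counts > 0` would give the 1-bit variant that slide 4 says saturates.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TUPLES, TUPLE_SIZE, PATTERN_LEN = 100, 8, 256  # assumed sizes
# Each tuple samples a fixed, randomly chosen set of n pixel positions.
positions = rng.integers(0, PATTERN_LEN, size=(N_TUPLES, TUPLE_SIZE))
bit_weights = 2 ** np.arange(TUPLE_SIZE)         # n sampled bits -> memory address

def addresses(pattern):
    """Map a binary pattern (flat 0/1 array) to one address per tuple."""
    return pattern[positions] @ bit_weights       # shape (N_TUPLES,)

class NTupleClassifier:
    def __init__(self, n_classes):
        # One "bank" of frequency counts per class: counts[class, tuple, address]
        self.counts = np.zeros((n_classes, N_TUPLES, 2 ** TUPLE_SIZE))

    def train(self, pattern, klass):
        # Training is just noting address occurrences for the true class.
        self.counts[klass, np.arange(N_TUPLES), addresses(pattern)] += 1

    def classify(self, pattern):
        addr = addresses(pattern)
        eps = 1.0  # add-one smoothing for the ML probability estimate
        freq = self.counts[:, np.arange(N_TUPLES), addr] + eps
        prob = freq / (self.counts.sum(axis=2) + eps * 2 ** TUPLE_SIZE)
        return int(np.argmax(np.log(prob).sum(axis=1)))
```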

  5. N-Tuple Architecture

  6. Standard N-Tuple Features • Superfast training • As fast as you can read the data in! • Superfast recognition (ditto) • Simple • Applicable to binary images

  7. Grey-level

  8. Threshold?

  9. Niblack?

  10. Beavis?

  11. Continuous N-Tuple • Samples the grey-level image directly • Pre-compiles the training samples into LUTs • Fills each LUT entry with the absolute distance to the closest sampled point • Recognition speed not compromised • BUT: slower to train • Memory problems… • Not probabilistic • Sensitive to spurious training data!
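
A rough sketch of the pre-compilation step, assuming grey levels quantized to 16 levels and only 3 points per tuple purely to keep the table small (16^3 entries); the paper's actual parameters differ. It also makes the slide's memory problem concrete: LUT size grows as levels^n. The `min` over training samples is what makes the method sensitive to spurious training data, since one bad sample lowers the stored distance for every cell near it.

```python
import itertools
import numpy as np

LEVELS, N_POINTS = 16, 3  # assumed quantization and tuple size

def compile_lut(class_samples):
    """Pre-compile one tuple's LUT for one class.

    class_samples: (n_train, N_POINTS) array of quantized grey values
    sampled at this tuple's pixel positions from the training images.
    Each LUT cell stores the absolute (L1) distance to the closest
    training sample, as on slide 11.
    """
    lut = np.empty((LEVELS,) * N_POINTS)
    for cell in itertools.product(range(LEVELS), repeat=N_POINTS):
        dists = np.abs(class_samples - np.array(cell)).sum(axis=1)
        lut[cell] = dists.min()  # one spurious sample can poison this cell
    return lut

def score(luts, tuple_values):
    """Recognition: one lookup per tuple, sum the distances; smallest wins."""
    return sum(lut[tuple(v)] for lut, v in zip(luts, tuple_values))
```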

  12. Continuous N-Tuple Results

  13. Bit-Plane Decomposition • Alternative to continuous n-tuple • Uses a combination of binary n-tuple classifiers • One for each bit-plane (so 8 for 256-grey level) • Good results reported • Speed sacrifice
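
A sketch of the decomposition itself. Combining the eight per-plane classifier scores by summation is my assumption; the slide does not say how the combination is done, and the `scores` method is a hypothetical stand-in for whatever per-class score each plane's binary n-tuple classifier returns.

```python
import numpy as np

def bit_planes(image, bits=8):
    """Split an 8-bit grey-level image into `bits` binary images.

    Plane b holds bit b of every pixel; each plane is then handled by
    an ordinary binary n-tuple classifier.
    """
    return [((image >> b) & 1).astype(np.uint8) for b in range(bits)]

def combined_score(classifiers, image):
    """Assumed combination rule: sum per-plane class scores.
    `scores` is a hypothetical per-class score method."""
    planes = bit_planes(image)
    return sum(clf.scores(plane) for clf, plane in zip(classifiers, planes))
```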

  14. Scanning N-Tuple Classifier (SNT) • Introduced in 1995 (Lucas; Lucas + Amiri) • Since investigated by other research groups (IBM, KAIST, Kent, Athens) • In a recent study it was one of the best classifiers on the UNIPEN dataset • A simple modification of the n-gram model: an n-gram with gaps!

  15. Scanning N-Tuple • Chain-code the image (e.g. 0 2 3 2) • Scan the sampler along the chain code • Estimate weights from the address occurrences • Classify by summing the weights for each class • Softmax function -> posterior probability • Train • DEMO!
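
A minimal sketch of the scan-and-sum recognition step, assuming an 8-direction chain-code alphabet; the tuple size n = 4 and the gap of 2 between sampled symbols are illustrative choices, not the papers' settings.

```python
import numpy as np

N, GAP, DIRS = 4, 2, 8  # tuple size, gap between samples, chain-code alphabet

def addresses(chain):
    """Scan an n-tuple "with gaps" along a chain-code sequence.

    chain: direction codes in 0..DIRS-1, e.g. [0, 2, 3, 2, ...].
    Yields one table address per window position.
    """
    span = (N - 1) * GAP + 1
    for start in range(len(chain) - span + 1):
        addr = 0
        for i in range(N):
            addr = addr * DIRS + chain[start + i * GAP]
        yield addr

def classify(chain, weight_tables):
    """weight_tables[k] holds class k's log-likelihood weights l_k."""
    # Activation a_k: sum the weights of every address the scanner visits.
    acts = np.array([sum(l_k[a] for a in addresses(chain))
                     for l_k in weight_tables])
    y = np.exp(acts - acts.max())  # softmax (shifted for numerical stability)
    return y / y.sum()             # posterior probabilities y_k
```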

  16. Recent Work • Extensive evaluation (IBM) • Directional + bit-plane decomposition for smaller tables (Kent) • Mixture models for table compression (IBM, KAIST) • Clustering (Athens) • Discriminative training (Essex): better accuracy (why?)

  17. Terminology • m – frequency count • l – log likelihood weights • a – class activation vector • y – output vector (posterior prob.) • t – target vector

  18. Likelihood Score for Class k given Sequence s
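
The equation on this slide was an image and did not survive the transcript; a hedged reconstruction from the slide-17 terminology, where the position index p and address function u_p are my notation:

```latex
% activation (log-likelihood score) of class k for sequence s:
% sum class k's weights over every address the scanner visits
a_k \;=\; \sum_{p} l_k\!\left[\,u_p(s)\,\right]
```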

  19. Softmax Function • Interpret as posterior probability y_k
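
The formula itself is missing from the transcript; the standard softmax it almost certainly showed is:

```latex
% class activations -> posterior probabilities
y_k \;=\; \frac{e^{a_k}}{\sum_{j} e^{a_j}}
```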

  20. Maximum Likelihood Est.
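
Again the equation is lost; a plausible reconstruction, with the smoothing constant epsilon my addition to avoid taking the log of zero:

```latex
% ML weight: log relative frequency of address u in class k's training data
l_k[u] \;=\; \log\frac{m_k[u] + \epsilon}{\sum_{v}\left(m_k[v] + \epsilon\right)}
```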

  21. Discriminative Training • Maximise probability of correct classification • Minimise cross-entropy

  22. Cross Entropy Error Term
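
The missing error term is presumably the usual cross-entropy between the target and output vectors of slide 17:

```latex
% t is the 1-of-K target vector, y the softmax output
E \;=\; -\sum_{k} t_k \log y_k
```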

  23. Weight Update Rule • Separate cases for k = true class and k ≠ true class • Apply the weight updates to every address the scanner visits
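
The update equations are lost from the transcript; a reconstruction obtained by differentiating the cross-entropy through the softmax (the learning rate eta is my notation):

```latex
% dE/da_k = y_k - t_k, so for every address u the scanner visits:
\Delta l_k[u] \;=\; -\eta\,(y_k - t_k),
\qquad t_k = \begin{cases} 1 & k = \text{true class} \\ 0 & \text{otherwise} \end{cases}
```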

  24. Cross-Entropy v. ML

  25. Design Process

  26. MNIST Results

  27. Future Work • Improve accuracy further • Mixture Models • Training data deformation models • Better understanding of discrim v. ML • Sparse (e.g. trie) SNT • Optimal (all) threshold version for colour / grey-level images

  28. Why Mixture? To tell A from B! • Class A: 010111000101001, 010110100010101, 0101010001011, 10100101010101, 010101010001011, 01010101001010101 • Class B: 1111011101111101, 00010001000001000, 00001000100010001, 11110111111011111, … • A single frequency table has to average over B's two very different sub-populations (mostly-1 strings and mostly-0 strings), so its statistics end up resembling A's; a mixture can give each sub-population its own table

  29. Why Opti-Thresh?

  30. Global Mean Threshold

  31. Optimally Thresholded Image

  32. Conclusions • N-Tuple classifiers – fantastic speed • High degree of design skill needed to make them work well • Compete with much more complex systems • Interesting future work to be done!

  33. Further Reading • Continuous n-tuple: Simon M. Lucas, “Face recognition with the continuous n-tuple classifier,” Proceedings of the British Machine Vision Conference (1997), pp. 222-231 • Scanning n-tuple: Simon M. Lucas and A. Amiri, “Statistical syntactic methods for high performance OCR,” IEE Proceedings on Vision, Image and Signal Processing (1996), vol. 143, pp. 23-30 • Simon M. Lucas, “Discriminative Training of the Scanning N-Tuple Classifier,” International Workshop on Artificial Neural Networks (2003), pp. 222-229 (draft) • Plus many more references in those papers • Search Google for “n-tuple” and “scanning n-tuple”
