
Microsoft’s Cursive Recognizer



Presentation Transcript


  1. Microsoft’s Cursive Recognizer
Jay Pittman and the entire Microsoft Handwriting Recognition Research and Development Team
Microsoft Tablet PC

  2. Syllabus
• Neural Network Review
• Microsoft’s Own Cursive Recognizer
• Isolated Character Recognizer
• Paragraph’s Calligrapher
• Combined System

  3. Neural Network Review
[Figure: a small example network with activation and weight values]
• Directed acyclic graph
• Nodes and arcs, each containing a simple value
• Nodes contain activations; arcs contain weights
• At run time, we do a “forward pass” which computes activations from inputs to hiddens, and then to outputs
• From the outside, the application only sees the input nodes and output nodes
• Node values (in and out) range from 0.0 to 1.0
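The forward pass described on this slide can be sketched in a few lines. This is an illustrative sketch only; the weights and the tiny 2-2-1 topology are made up, not taken from the real recognizer.

```python
import math

def forward(inputs, w_ih, w_ho):
    """One forward pass: inputs -> hidden activations -> output activations.

    The logistic sigmoid keeps every node value in (0.0, 1.0), matching the
    slide's note that node values range from 0.0 to 1.0.
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    hidden = [sigmoid(sum(i * w for i, w in zip(inputs, col))) for col in w_ih]
    output = [sigmoid(sum(h * w for h, w in zip(hidden, col))) for col in w_ho]
    return output

# Tiny 2-input, 2-hidden, 1-output net with invented weights
out = forward([1.0, 0.5],
              w_ih=[[1.0, -2.3], [1.4, 0.1]],   # one weight column per hidden node
              w_ho=[[0.6, 0.8]])                # one weight column per output node
```

The application sees only `inputs` and `out`; the hidden activations are internal, as the slide says.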

  4. TDNN: Time Delay Neural Network
[Figure: a TDNN over a sequence of input segments, items 1–6]
• This is still a normal back-propagation network
• All the points in the previous slide still apply
• The difference is in the connections
• Connections are limited
• The input is segmented, and the same features are computed for each segment
• (Speaker note: I decided I didn’t like this artwork, so I started over on the next slide)

  5. TDNN: Time Delay Neural Network
[Figure: TDNN columns over items 1–6]
Edge effects: for the first two and last two columns, the hidden nodes and input nodes that reach outside the range of our output receive zero activations

  6. TDNN: Weights Are Shared
[Figure: the same TDNN, with identical weight values (e.g. -0.006, 0.1372, 0.0655) repeated across columns]
Since the weights are shared, this net is not really as big as it looks. When a net is stored (on disk or in memory), there is only one copy of each weight. On disk, we don’t store the activations, just the weights (and architecture).
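Weight sharing means one kernel of weights is slid across the segment sequence, as in a 1-D convolution. A minimal sketch, with an invented kernel and invented scalar segment features (the real TDNN uses feature vectors per segment):

```python
import math

def tdnn_layer(segments, kernel):
    """Apply one shared-weight (time-delay) layer across a segment sequence.

    `kernel` is a single weight vector reused at every window position, so
    the stored network holds one copy of each weight no matter how many
    input segments arrive.
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    width = len(kernel)
    outputs = []
    for t in range(len(segments) - width + 1):
        window = segments[t:t + width]
        outputs.append(sigmoid(sum(w * x for w, x in zip(kernel, window))))
    return outputs

# Six segment features, one shared 3-wide kernel -> four hidden activations
hidden = tdnn_layer([0.2, 0.9, 0.4, 0.1, 0.7, 0.3],
                    kernel=[-0.006, 0.1372, 0.0655])
```

However long the ink gets, only the three kernel weights are stored.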

  7. Training
• We use back-propagation training
• We collect millions of words of ink data from thousands of writers
  • Young and old, male and female, left-handed and right-handed
  • Natural text, newspaper text, URLs, email addresses, street addresses
• We collect in over a dozen languages around the world
• Training on such large databases takes weeks
• We constantly worry about how well our data reflect our customers
  • Their writing styles
  • Their text content
• We can be no better than the quality of our training sets
  • And that goes for our test sets too

  8. Languages
• We ship now in:
  • English (US), English (UK), French, German, Spanish, Italian
• We have done some initial work in:
  • Dutch, Portuguese, Swedish, Danish, Norwegian, Finnish
  • We cannot predict when we might ship these
• Using a completely different approach, we also ship now in:
  • Japanese, Chinese (Simplified), Chinese (Traditional), Korean

  9. Recognizer Architecture
[Figure: ink is segmented, the TDNN produces an output matrix of per-character scores, and a beam search guided by the lexicon turns that matrix into a top-10 list, e.g. dog 68, clog 57, dug 51, doom 42, divvy 37, ooze 35, cloy 34, doxy 29, client 22, dozy 13]

  10. Segmentation
[Figure: candidate segmentation points shown four ways: midpoints going up, tops, bottoms, and tops and bottoms]

  11. TDNN Output Matrix
[Figure: the output matrix, with one row per character (a–z, 0–9) and one column per ink segment]

  12. Language Model
• Now that we have a complete output matrix from the TDNN, what are we going to do with it?
• We get better recognition if we bias our interpretation of that output matrix with a language model
  • Better recognition means we can handle sloppier cursive
• The lexicon (system dictionary) is the main part
  • But there is also a user dictionary
  • And there are regular expressions for things like dates and currency amounts
• We want a generator
  • We ask it: “What characters could come next after this prefix?”
  • It answers with a set of characters
• We still output the top letter recognitions
  • In case you are writing a word out-of-dictionary
  • You will have to write more neatly
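The generator question, “what characters could come next after this prefix?”, falls out naturally if the lexicon is stored as a letter trie. A minimal sketch with a toy four-word lexicon (the real lexicon is a compiled binary structure with regional flags and unigram scores):

```python
class TrieNode:
    def __init__(self):
        self.children = {}     # letter -> child node
        self.is_word = False   # leaf marker: end of a valid word

def build_trie(words):
    root = TrieNode()
    for word in words:
        node = root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True
    return root

def next_chars(root, prefix):
    """The generator: which characters could follow this prefix?"""
    node = root
    for ch in prefix:
        if ch not in node.children:
            return set()       # prefix is not in the language model
        node = node.children[ch]
    return set(node.children)

lexicon = build_trie(["dog", "dot", "doom", "dug"])
```

During the beam search, only characters in `next_chars(...)` extend an in-dictionary hypothesis.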

  13. Lexicon
[Figure: the lexicon stored as a letter trie. Nodes are simple nodes or leaf nodes (end of a valid word); leaves carry a unigram score (log of probability, e.g. 4125, 3463) and regional flags marking entries as U.S.-only, U.K.-only, Australian-only, or Canadian-only (e.g. “colours” UK, “theaters” US)]

  14. Clumsy Lexicon Issue
• The lexicon includes all the words in the spellchecker
• The spellchecker includes obscenities
  • Otherwise they would get marked as misspelled
  • But people get upset if these words are offered as corrections for other misspellings
  • So the spellchecker marks them as “restricted”
• We live in an apparently stochastic world
  • We will throw up 6 theories about what you were trying to write
  • If your ink is near an obscene word, we might include that
• Dilemma:
  • We want to recognize your obscene word when you write it
    • Otherwise we are censoring, which is NOT our place
  • We DON’T want to offer these outputs when you don’t write them
• Solution (weak):
  • We took these words out of the lexicon
  • You can still write them, because you can write out-of-dictionary
  • But you have to write very neat cursive, or nice handprint

  15. Grammars
[Figure: finite-state diagrams (Start/Stop) for two regular expression rules]
seconds = digit | "12345" digit;
MonthNum = "123456789" | "1" "012";
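The two grammar rules on this slide translate directly into ordinary regular expressions. A sketch, assuming `digit` means 0–9: `seconds` accepts 0–59 (a single digit, or a digit 1–5 followed by any digit), and `MonthNum` accepts 1–12.

```python
import re

# seconds = digit | "12345" digit;      -> values 0-59
SECONDS = re.compile(r"^(?:[0-9]|[1-5][0-9])$")

# MonthNum = "123456789" | "1" "012";   -> values 1-12
MONTH_NUM = re.compile(r"^(?:[1-9]|1[0-2])$")
```

In the shipping system such rules are compiled into binary tables and drive the beam search, just as the lexicon does.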

  16. Factoids and Input Scope
Setting the Factoid property merely enables and disables various grammars and lexica
• IS_DEFAULT (see next slide)
• IS_PHRASELIST (user dictionary only)
• IS_DATE_FULLDATE, IS_TIME_FULLTIME
• IS_TIME_HOUR, IS_TIME_MINORSEC
• IS_DATE_MONTH, IS_DATE_DAY, IS_DATE_YEAR, IS_DATE_MONTHNAME, IS_DATE_DAYNAME
• IS_CURRENCY_AMOUNTANDSYMBOL, IS_CURRENCY_AMOUNT
• IS_TELEPHONE_FULLTELEPHONENUMBER
• IS_TELEPHONE_COUNTRYCODE, IS_TELEPHONE_AREACODE, IS_TELEPHONE_LOCALNUMBER
• IS_ADDRESS_FULLPOSTALADDRESS
• IS_ADDRESS_POSTALCODE, IS_ADDRESS_STREET, IS_ADDRESS_STATEORPROVINCE, IS_ADDRESS_CITY, IS_ADDRESS_COUNTRYNAME, IS_ADDRESS_COUNTRYSHORTNAME
• IS_URL, IS_EMAIL_USERNAME, IS_EMAIL_SMTPEMAILADDRESS
• IS_FILE_FULLFILEPATH, IS_FILE_FILENAME
• IS_DIGITS, IS_NUMBER
• IS_ONECHAR
• NONE (this yields an out-of-dictionary-only system)

  17. Default Factoid
• Used when no factoid is set
• Intended for natural text, such as the body of an email
• Includes system dictionary, user dictionary, hyphenation rule, number grammar, web address grammar
  • All wrapped by optional leading punctuation and trailing punctuation
  • The hyphenation rule allows a sequence of dictionary words with hyphens between
• Alternatively, can be a single character (any character supported by the system)
[Figure: state machine from Start to Final: optional leading punctuation, then one of SysDict, UserDict, Number, Web, Hyphenation, or Single Char, then optional trailing punctuation]

  18. Factoid Extensibility
• All the grammar-based factoids were specified in a regular expression grammar, and then “compiled” into the binary table using a simple compiler
• The compiler is available at run time
  • Software vendors can add their own regular expressions
  • The string is set as the value of the Factoid property
  • One could imagine the DMV adding automobile VINs
• This is in addition to the ability to load the user dictionary
  • One could load 500 color names for a color field in a form-based app
  • Or 8,000 drug names in a prescription app
  • Construct a WordList object, and set it to the WordList property
  • Set the Factoid property to “IS_PHRASELIST”

  19. Recognizer Architecture
[Figure: the architecture diagram from slide 9, shown again as a transition: ink segments feed the TDNN, the TDNN’s output matrix feeds a lexicon-guided beam search, which produces the top-10 list]

  20. DTW
• Dynamic Time Warping
• Dynamic Programming
• Elastic Matching
[Figure: matching “elephant” (from prototypes / from the dictionary) against “elphant” (from the user)]

  21. Brute Force Matching
Matrix of all possible matches. The user must provide a distance function: 0 means match, 1 means no match. Rows: entry from the dictionary (“elephant”); columns: entry from the user (“elphant”).

t  1 1 1 1 1 1 0
n  1 1 1 1 1 0 1
a  1 1 1 1 0 1 1
h  1 1 1 0 1 1 1
p  1 1 0 1 1 1 1
e  0 1 1 1 1 1 1
l  1 0 1 1 1 1 1
e  0 1 1 1 1 1 1
   e l p h a n t

  22. Cumulative Matching
Each cell adds its score to the minimum of the cumulative scores to the left, below, and below-left. We start in the lower-left corner and work our way up to the upper-right corner. The upper-right corner cell holds the total cost of aligning these two sequences.

Match scores:    Cumulative scores:
1 1 0 1          2 1 0 1
1 0 1 1          1 0 1 2
0 1 1 1          0 1 2 3
0 1 1 1          0 1 2 3

  23. Cumulative Matching
t  6 6 5 4 3 2 1
n  5 5 4 3 2 1 2
a  4 4 3 2 1 2 3
h  3 3 2 1 2 3 4
p  2 2 1 2 3 4 5
e  1 1 1 2 3 4 5
l  1 0 1 2 3 4 5
e  0 1 2 3 4 5 6
   e l p h a n t
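The cumulative-matching recurrence on these slides can be written down directly. A sketch, using the slides’ 0/1 match score as the default distance function; with “elephant” against “elphant” it reproduces the matrix above, ending at a total cost of 1 in the upper-right corner.

```python
def dtw(proto, user, dist=lambda a, b: 0 if a == b else 1):
    """Cumulative DTW cost matrix, filled from the lower-left corner.

    Each cell adds its local distance to the minimum of the cells to the
    left, below, and below-left; the upper-right cell is the alignment cost.
    """
    rows, cols = len(proto), len(user)
    c = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            d = dist(proto[i], user[j])
            if i == 0 and j == 0:
                c[i][j] = d
            elif i == 0:                      # bottom row: only a left neighbor
                c[i][j] = d + c[i][j - 1]
            elif j == 0:                      # first column: only a below neighbor
                c[i][j] = d + c[i - 1][j]
            else:
                c[i][j] = d + min(c[i][j - 1], c[i - 1][j], c[i - 1][j - 1])
    return c

cost = dtw("elephant", "elphant")[-1][-1]     # total alignment cost
```

Replacing the 0/1 distance with a real-valued ink distance gives the prototype matching of the later slides.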

  24. Alignment
Each cell can remember which neighbor it used, and these can be used to follow a path back from the upper-right corner. A vertical move indicates an omission in the entry from the user; a horizontal move indicates an insertion in the entry from the user.

t  5 5 5 4 3 2 1
n  4 4 4 3 2 1 2
a  3 3 3 2 1 2 3
h  2 2 2 1 2 3 4
p  1 1 1 2 3 4 5
e  0 1 1 2 3 4 5
l  1 0 1 2 3 4 5
e  0 1 2 3 4 5 6
   e l p h a n t

  25. Ink Prototypes
[Figure: DTW matching a sequence of ink segments from stored prototypes against a sequence of ink segments from the user, with a real-valued distance (e.g. 0.2, 1.8, 2.8) for each pair instead of a 0/1 match score]

  26. Searching the Prototypes
We can compute the score for every word in the dictionary, to find the closest set of words. This is slow, due to the size of the dictionary.

t  4 4 4 4 4 3 2
n  3 3 3 3 3 2 3
a  2 2 2 3 2 3 4
g  1 1 2 2 3 4 5
e  0 1 1 2 3 4 5
l  1 0 1 2 3 4 5
e  0 1 2 3 4 5 6
   e l p h a n t

  27. DTW as a Stack
If we compute row-by-row (from the bottom), we can treat the matrix as a stack. We can pop off a row when we back up a letter. This allows us to walk the dictionary tree.
[Figure: the DTW matrix for the user’s “elphant”, with rows pushed and popped as the search walks the lexicon trie from word to word]

  28. Using Columns to Avoid Memory
• If we compute the scores column-by-column, we don’t need to store the entire matrix
• This isn’t a stack, so we don’t have to pop back to previous columns
• We don’t even need double buffering; we just need 2 local variables
• We don’t need to store the simple distance, just the cumulative distance
[Figure: the same computation shown three ways: full matrix, double buffer, and single buffer plus two locals]

  29. Beam Search
We can do column-by-column and row-by-row at the same time if we treat the rows as a tree, with each new row pointing backwards to its parent.
[Figure: rows arranged as a tree of letter hypotheses (e, l, p, h, g, a, …), each carrying its current cumulative scores]

  30. Why Is It Called a Beam Search?
• As we compute a column, we can remember the best score so far
• We add a constant to that score
• Any scores worse than that are culled
• Back in the original cumulative distance matrix, this keeps us from computing cells too far away from the best path (the beam)
• Since we are following a tree, culling a cell may allow us to avoid an entire subtree
  • This is the real savings
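The culling step above is simple to state in code. A sketch; the beam-width constant here is invented, and in the real recognizer a culled cell also prunes the lexicon subtree hanging off it.

```python
def cull(column_scores, beam_width=2.0):
    """Keep only the cells within `beam_width` of the best (lowest) score.

    Returns the surviving indices; everything outside the beam is culled,
    and any hypothesis subtree rooted at a culled cell is never expanded.
    """
    best = min(column_scores)
    return [i for i, s in enumerate(column_scores) if s <= best + beam_width]

survivors = cull([0.4, 1.1, 3.9, 2.3, 7.0])
```

Only the cells near the best path (the beam) are carried into the next column.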

  31. Out of Dictionary
• This is the wrong name:
  • It should really be called Out of Language Model
  • Or simply Unsupported
  • Since letter sequences in the language model are called “Supported”
• We simply want to walk across the output matrix and find the best characters
• This is needed for part numbers, and for words and abbreviations we don’t yet have in the user dictionary
• We bias the output (slightly) toward the language statistics by using bigram probabilities
  • For instance, the probability of the sequence “at”:
    • P(at|ink) = P(a|ink) P(t|ink) P(at)
    • where P(a|ink) and P(t|ink) come from the output matrix
    • and P(at) comes from the bigram table
• We impose a penalty for OOD words, relative to supported words
  • Otherwise the entire language model accomplishes nothing
• The COERCE flag, if on, disables the OOD system
  • This forces us to output the nearest language-model character sequence, or nothing at all
• There is also a Factoid NONE, which yields an out-of-dictionary-only recognizer
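The bigram-biased scoring formula can be sketched for a two-column output matrix. All the probabilities below are invented toy numbers, purely to show how P(x|ink), P(y|ink), and the bigram table combine.

```python
# Toy output-matrix columns P(char | ink) and a toy bigram table P(xy)
p_char_given_ink = [{"a": 0.7, "o": 0.3},
                    {"t": 0.6, "l": 0.4}]
p_bigram = {"at": 0.05, "al": 0.02, "ot": 0.01, "ol": 0.03}

def best_ood(columns, bigrams):
    """Walk the output matrix, scoring two-letter strings as
    P(xy|ink) = P(x|ink) * P(y|ink) * P(xy), per the slide."""
    best, best_score = None, -1.0
    for x, px in columns[0].items():
        for y, py in columns[1].items():
            score = px * py * bigrams.get(x + y, 0.0)
            if score > best_score:
                best, best_score = x + y, score
    return best, best_score

word, score = best_ood(p_char_given_ink, p_bigram)
```

Here “at” wins even though “ol” has a higher bigram probability than “ot”, because the ink evidence dominates; the bigram table only nudges the result toward likely letter pairs.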

  32. Error Correction: SetTextContext()
Goal: better context usage for error correction scenarios
• User writes “Dictionary”
• Recognizer misrecognizes it as “Dictum”
• User selects “um” and rewrites “ionary”
• The TIP notes the partial word selection, and puts the recognizer into correction mode with left and right context
• The beam search artificially recognizes the left context (“Dict”)
• The beam search runs the ink as normal
• The beam search artificially recognizes the right context (“”)
• This produces “ionary” in the top-10 list; the TIP must insert this to the right of “Dict”

  33. Isolated Character Recognizer
• The input character is fed via a variety of features
• A single neural network takes all inputs
• We have also experimented with an alternate version which has a separate neural network per stroke count
[Figure: input features feeding one neural network with per-character output activations]

  34. Calligrapher
• The Russian recognition company Paragraph sold itself to SGI (Silicon Graphics, Incorporated), who then sold it to Vadem, who sold it to Microsoft
• In the purchase we obtained:
  • Calligrapher: the cursive recognizer that shipped on the first Apple Newton
  • Transcriber: a handwriting app for handheld computers
• We combined our system with Calligrapher
  • We use a voting system to combine each recognizer’s top-10 list
  • They are very different, and make different mistakes
  • We get the best of both worlds
• If either recognizer outputs a single-character “word”, we forget these lists and run the isolated character recognizer
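One simple way to vote two top-10 lists together is a Borda-style rank count. This is only an illustration of the idea; the slide does not describe the actual voting scheme, and the word lists below are toy examples.

```python
def combine(list_a, list_b):
    """Merge two recognizers' ranked lists by a simple rank vote.

    Each list contributes points by rank (10 for 1st, 9 for 2nd, ...);
    words proposed by both recognizers accumulate points from both, so
    agreement between the two systems floats a word to the top.
    """
    points = {}
    for ranked in (list_a, list_b):
        for rank, word in enumerate(ranked[:10]):
            points[word] = points.get(word, 0) + (10 - rank)
    return sorted(points, key=points.get, reverse=True)

merged = combine(["dog", "clog", "dug"], ["dog", "dag", "clog"])
```

Because the two recognizers make different mistakes, a word they agree on ("dog" here) outranks any word only one of them proposed.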

  35. HMMs (Hidden Markov Models)
Start with a DTW, but replace the sequence of ink segments on the left with a sequence of probability histograms; this represents a set of ink samples.
[Figure: each row of the DTW matrix replaced by a histogram of segment probabilities]

  36. Calligrapher
[Figure: Calligrapher’s architecture: HMM letter models feed a lexicon-guided beam search that produces a top-10 list, e.g. dog 59, clog 54, dug 44, dag 37, dig 31, doom 29, cloy 23, clay 18, clag 14, clug 9]

  37. Personalization
• Ink shape personalization
  • Simple concept: just do the same training on this customer’s ink
  • Start with components already trained on a massive database of ink samples
  • Train further on the specific user’s ink samples
  • Trains the TDNN, combiner nets, and isolated character network
• Explicit training
  • The user must go to a wizard and copy a short script
  • Does have labels from the customer
  • Limited in quantity, because of tediousness
• Implicit training
  • Data is collected in the background during normal use
  • Doesn’t have labels from the customer
  • We must assume the correctness of our recognition result, using our confidence measure
  • We get more data
• Much of the work is in the GUI, the database support, management of different users’ trained networks, etc.
• Lexicon personalization: harvesting
  • Simple concept: just add the user’s new words to the lexicon
  • Examples: RTM, dev, SDET, dogfooding, KKOMO, featurization
  • Happens when correcting words in the TIP
  • Also scan Word docs and outgoing email (avoiding spam)
