
A Computational Theory of Writer Recognition



  1. Dissertation Defense
  A Computational Theory of Writer Recognition
  Catalin I. Tomai, Department of Computer Science and Engineering

  2. Outline
  • Problem Domain
  • A Computational Theory of Writer Recognition
  • Algorithms and Representations
    • Pictorial Similarity Examination
    • Identification of Document Elements: Complete Information, Partial Information
    • Extraction of Document Elements: Classifier Combination, Dynamic Classifier Selection
    • Determination of Discriminating Elements
    • Pattern Examination and Classification: Writer Verification, Writer Identification
  • Conclusions

  3. Writer Recognition
  • Biometrics: physiological (face, iris pattern, fingerprints) and behavioral (voice, handwriting).
  • Forensic sciences: court testimony – Daubert v. Merrell Dow (Supreme Court, 1993): forensic techniques need to be based on testing, error rates, peer review, and acceptability.
  • Practiced by Forensic Document Examiners (FDEs); experts perform significantly better than non-professionals [Kam et al., 1994, 1997].
  • Semi-automatic computer-based systems: FISH (Germany, 1970), SCRIPT (Holland, 1994); used by government agencies (IRS, INS, FBI).

  4. Handwriting Analysis Taxonomy
  Handwriting Analysis:
  • Synthesis
  • Recognition
    • Personality identification (Graphology)
    • Handwriting Recognition (on-line / off-line)
    • Writer Recognition: Writer Identification and Writer Verification – over natural writing, forgery, or disguised writing; text dependent or text independent

  5. Problem Domain
  Writer Recognition vs. Handwriting Recognition:
  • Identification model – which writer, out of 1, …, n, wrote the document?
  • Verification model – same writer or different writer? (Yes, same writer / No, different writer)
  • Recognition model – what was written (handwriting recognition)?

  6. Problem Domain
  Individuality: no two writings by different persons are identical.
  Variability: no two writings by the same person are identical.
  [figure: handwriting samples from Writer A and Writer B]

  7. Problem Domain – Previous Work
  • partial solutions, no integrated framework
  • features do not reflect the experience of human examiners
  • small number of documents/writers
  • recognition/extraction of document elements overlooked

  8. Computational Theory of Writer Recognition
  Inspired by the computational theory of vision [Marr, 1980]:
  1. Theory: developed from studies of how human experts discriminate between the handwritings of different people. Process: Query Documents → Analysis → Comparison → Evaluation → Likelihood Ratio → Writer Identity.
  2. Algorithms (pattern recognition / machine learning / computer vision): Pictorial Similarity Examination, Identification of Document Elements, Pattern Examination and Classification, Determination of Discriminating Elements.
  2. Representation: textural/statistical/structural features; document elements (characters/words); discriminating elements (habits).
  3. Hardware/Software.

  9. Computational Theory of Writer Recognition
  Framework roadmap: Query Documents → Analysis → Comparison → Evaluation → Likelihood Ratio → Writer Identity, realized by Pictorial Similarity Examination, Identification of Document Elements, Pattern Examination and Classification, and Determination of Discriminating Elements.

  10. Pictorial Similarity
  Theory: eliminate dissimilar candidates. Representation: handwritten documents treated as textures.
  Algorithms:
  • Wavelets (DCT, Daubechies, Haar, Antonini, Odegard, …) – humans process images in a multi-scale way.
  • Gabor filter bank – reasonably models most spatial aspects of the visual cortex; orientations θ ∈ {0, 45, 90, 135}, frequencies f ∈ {2, 4, 8, 16, 32}, and combinations.
  • GLCM (gray-level co-occurrence matrix) features after pre-processing.
  Each algorithm produces a feature vector (components 1 … 28). Question: what is the most descriptive wavelet transform/filter for a given handwriting?
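
  A minimal sketch of the Gabor-filter-bank texture features described above, assuming scikit-image is available. The orientation/frequency grid follows the slide; the mean/std summary and the frequency-to-wavelength conversion are illustrative choices, not the dissertation's exact representation.

```python
# Gabor-bank texture features for a grayscale handwriting image (sketch).
# Assumption: f values on the slide are treated as wavelengths in pixels.
import numpy as np
from skimage.filters import gabor

def gabor_features(image, thetas=(0, 45, 90, 135), freqs=(2, 4, 8, 16, 32)):
    """Return a texture feature vector: mean and std of each filter response."""
    feats = []
    for theta in thetas:
        for f in freqs:
            # skimage expects frequency in cycles/pixel and theta in radians
            real, imag = gabor(image, frequency=1.0 / f, theta=np.deg2rad(theta))
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.std()])
    return np.asarray(feats)
```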

  11. Pictorial Similarity
  Training process: apply each algorithm (A1 … A28) to every image in the train exemplars database; store the results in an exemplar features database and a mean features database; a classifier then produces, for each handwriting exemplar, a ranking of algorithms (e.g., A3, A1, …, A7).
  Query process: extract the query document's features; an approximation algorithm [GLCM] proposes candidate algorithms; decision fusion finds the best algorithm for the query; the most similar exemplars are returned from the database.
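
  A minimal sketch of the retrieval step, assuming feature vectors have been precomputed as above: rank database exemplars by distance to the query and keep only the closest fraction as candidates. All names are illustrative; the 30% kept ties in with the "eliminate approx. 70%" figure on slide 14.

```python
# Nearest-exemplar retrieval over precomputed feature vectors (sketch).
import numpy as np

def retrieve_similar(query_feat, db_feats, db_labels, keep_frac=0.3):
    """Return labels of the keep_frac most similar exemplars (smallest L2 distance)."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    order = np.argsort(dists)
    n_keep = max(1, int(keep_frac * len(order)))
    return [db_labels[i] for i in order[:n_keep]]
```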

  12. Results
  Train set: 1927 images (167 writers). Test set: 1985 images (173 writers).
  [charts: adaptive scheme vs. best single algorithm; retrieval performance per algorithm set]

  13. Results
  [chart: selection frequency of each algorithm]

  14. Conclusion
  • Filtering: allows us to eliminate approximately 70% of the test-set exemplars (those not similar to the query image).
  • Performance: outperforms using a single filter for all query images.
  • Extensibility: more algorithms/decision schemes can be added to the mix.
  • Limited overhead: achieved by using pre-computed features and a mean-feature-vector database.

  15. Computational Theory of Writer Recognition
  Framework roadmap (as above). Identification of Document Elements: Complete Information (Transcript Mapping), Partial Information (Heuristic and Bayesian).

  16. Document Element Extraction
  Theory: extract document elements (characters/words/other) under three information regimes:
  • Complete information: a transcript is available (e.g., "From Nov 10 1999 Jim Elder …").
  • Partial information: only some content is known (e.g., the destination country "Mexico").
  • No information: the content must be hypothesized.

  17. Complete Information – Transcript Mapping
  Given the transcript ("From Nov 10 1999 Jim Elder …"): build a word-hypothesis set from the line image, run word recognition against the transcript words, align hypotheses to words with dynamic programming, then move to the next line and refine the results.
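
  A minimal sketch of the dynamic-programming alignment step, in the style of edit distance. Here `score(img, word)` stands in for a word recognizer returning a match cost; all names are illustrative, not the dissertation's actual implementation.

```python
# Align word-image hypotheses to transcript words by dynamic programming (sketch).
import numpy as np

def align(hypotheses, transcript, score, skip_cost=1.0):
    """Return the cost of the cheapest alignment, allowing skips on both sides."""
    n, m = len(hypotheses), len(transcript)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * skip_cost   # unmatched word images
    D[0, :] = np.arange(m + 1) * skip_cost   # unmatched transcript words
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i-1, j-1] + score(hypotheses[i-1], transcript[j-1]),
                          D[i-1, j] + skip_cost,   # skip a word image
                          D[i, j-1] + skip_cost)   # skip a transcript word
    return D[n, m]
```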

  18. Transcript Mapping

  19. Transcript Mapping

  20. Results
  From 3000 document images:
  • almost ½ million word images extracted
  • more than 2 million character images extracted
  [charts: word- and character-level error rates]

  21. Computational Theory of Writer Recognition
  Framework roadmap (as above). Identification of Document Elements: Complete Information (Transcript Mapping), Partial Information (Heuristic and Bayesian).

  22. Document Element Extraction – Partial Information
  Theory: when only partial information is available, combine heterogeneous information (script, structure, partial recognition results) while coping with missing data/noisiness and preserving interpretability.
  Example: Foreign Mail Recognition (FMR) – extract character/word images from mail pieces in the mail stream sent to foreign addresses.

  23. Foreign Mail Recognition

  24. Partial Information-Foreign Mail-Samples

  25. Partial Information – Heuristic Solution
  Postal-code (PC) candidates are extracted from the address block with their position and format (e.g., position 2:1:4 with format ddd; 2:1:5 invalid; 2:2:5 with dd ddddd; 2:1:2 with ddddd); character recognition output is noisy (e.g., "FKANCE" for "FRANCE"). A word-model recognizer (WMR) returns country candidates with confidences and expected PC formats:
  Country   Confidence   PC Format
  France    3.04         ddddd
  Latvia    4.58         dddd
  Greece    5.62         ddddd
  …         …            …
  Country candidates are then reordered/eliminated by PC-format compatibility (France 2.04 ddddd, Greece 5.62 ddddd, …), yielding the final interpretation "75014 France".
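
  A minimal sketch of the reorder/eliminate heuristic: keep only country candidates whose expected postal-code format matches an observed candidate, then sort by recognizer confidence (lower is better here, as on the slide). The data and the 'd' = digit format notation are illustrative.

```python
# Postal-code format matching and country-candidate reordering (sketch).
import re

def pc_matches(pc: str, fmt: str) -> bool:
    """Check a postal-code string against a format like 'ddddd' or 'dddd-ddd'."""
    pattern = "".join(r"\d" if c == "d" else re.escape(c) for c in fmt)
    return re.fullmatch(pattern, pc) is not None

def reorder(countries, pc_candidates):
    """countries: [(name, confidence, fmt)]; keep those with a matching PC candidate."""
    kept = [(name, conf, fmt) for name, conf, fmt in countries
            if any(pc_matches(pc, fmt) for pc in pc_candidates)]
    return sorted(kept, key=lambda t: t[1])

# Latvia ('dddd') is eliminated; France and Greece match '75014'.
print(reorder([("France", 3.04, "ddddd"), ("Latvia", 4.58, "dddd"),
               ("Greece", 5.62, "ddddd")], ["75014"]))
```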

  26. Partial Information – Bayesian Solution
  Structural features: 2 NoOfLines; 3,4,5 NoOfStrokes (lines 1,2,3); 8,9,10,11 LineLength (lines 1,2,3); 14 IsGap – gap on the last line.
  Script-differentiation features: 12 IsLatin (0/1); 13 IsCJK (0/1).
  Address-block component features: 15 PostalCodeLength; 16 PostalCodeLine – line on which the postal code is located; 18 PostalCodeFormat (e.g., ####, #a#a#a, etc.); 19 PostalCodeHasDash (e.g., ####-###); PostalCodeIsMixed (0/1) – is the postal code digits only or not; 20 PostalCodeCountryOrder (0/1) – is the postal code located before or after the country name.
  Recognition features: 6 CountryNameFirstLetter (from character recognizers); 7 CountryNameLastLetter (from character recognizers); 21 Continent; 22 CountryName.
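
  A minimal sketch of Bayesian country classification over discrete address-block features, using a naive Bayes stand-in; the dissertation's exact graphical model is not reproduced here, and the feature/value handling is illustrative. Features seen at test time are assumed to have appeared somewhere in training.

```python
# Naive-Bayes-style country classification over discrete features (sketch).
from collections import Counter, defaultdict

def train(samples):
    """samples: [(feature_dict, country)] -> priors, per-class counts, value sets."""
    priors = Counter(c for _, c in samples)
    likeli = defaultdict(Counter)            # (country, feature) -> value counts
    values = defaultdict(set)                # feature -> observed values
    for feats, c in samples:
        for f, v in feats.items():
            likeli[(c, f)][v] += 1
            values[f].add(v)
    return priors, likeli, values

def predict(feats, priors, likeli, values, alpha=1.0):
    """Return the country maximizing the (smoothed) unnormalized posterior."""
    total = sum(priors.values())
    best, best_p = None, -1.0
    for c, n_c in priors.items():
        p = n_c / total
        for f, v in feats.items():
            counts = likeli[(c, f)]
            # add-alpha smoothing over the feature's observed value set
            p *= (counts[v] + alpha) / (sum(counts.values()) + alpha * len(values[f]))
        if p > best_p:
            best, best_p = c, p
    return best
```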

  27. Results
  • 9270 mail piece images
  • 22 country destinations (each with > 30 training samples)
  • almost 3090 word images extracted

  28. Computational Theory of Writer Recognition
  Framework roadmap (as above). Identification of Document Elements: Complete Information (Transcript Mapping), Partial Information (Foreign Mail). Extraction of Document Elements: Classifier Combination, Classifier Selection.

  29. Classifier Combination – Decision Level
  Input: an ensemble of classifiers e1, e2, …, eN producing class decisions and scores s1, s2, …, sN; the input information is heterogeneous and uncertain. Proposed: a Dempster-Shafer-based unified framework for heterogeneous ensembles of classifiers, combining "local", "global", and "ensemble" information.
  Prior work – Global: [Xu et al., 1998], [Zhang et al., 2003]; Local: [Rogova et al., 1997]; Ensemble: BKS [Huang et al., 1995]; Feature and Classifier: FIM [Tahani et al., 1990].

  30. Classifier Combination
  Motivation:
  • Current DST-based combination methods use only global, local, or ensemble information, or at most combined local + global.
  • Current solutions are not always suitable for combining Type-III classifiers (those that attach a score to each class).
  Goals:
  • Adapt classic DST mass-computation methods to Type-III classifiers.
  • Integrate "ensemble" information into the Dempster-Shafer theory of evidence.
  • Combine "local", "global", and "ensemble" classifier information in one unified framework.
  • Estimate the impact of affirming uncertainty regarding the top choice (double hypotheses).
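
  For reference, a minimal sketch of Dempster's rule of combination, the core operation the framework builds on. This is the textbook rule over basic probability assignments (BPAs), not the dissertation's full framework; sets are represented as frozensets and the example masses are made up.

```python
# Dempster's rule for combining two BPAs over a frame of discernment (sketch).
from itertools import product

def combine(m1, m2):
    """m(C) proportional to sum over A & B == C of m1(A)*m2(B), normalized by 1-K."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        c = a & b
        if c:
            combined[c] = combined.get(c, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass falling on the empty set
    return {c: v / (1.0 - conflict) for c, v in combined.items()}

A, B = frozenset({"w1"}), frozenset({"w2"})
theta = A | B                            # frame of discernment {w1, w2}
m1 = {A: 0.6, theta: 0.4}                # classifier 1: evidence for w1
m2 = {A: 0.3, B: 0.5, theta: 0.2}        # classifier 2: mixed evidence
print(combine(m1, m2))                   # combined belief masses
```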

  31. Classifier Combination – Adaptation to Type-III Classifiers
  Computation of evidences, at the class level and at the classifier level:
  • Method 1: use recognition/substitution rates over all classes and for each top class.
  • Method 2: distance to a mean vector, with the constraint that for each classifier e_k the masses over all classes sum to 1.
  • Method 3: a membership function replaces the distance; uncertainty is used in the combination.

  32. Classifier Combination – Integrating BKS into DST
  BKS (Behavior Knowledge Space): each unit of the K-dimensional knowledge space corresponds to one configuration e = (e1, …, eK) of the K classifiers' decisions; n_e(m) is the number of training patterns with that configuration for which the true class T = C_m, and n_e is the total number of training patterns for the configuration.
  Problem: as the number of classifiers increases, BKS becomes sparse.
  Solution: RKS (Reduced Knowledge Space) – addresses sparseness by modeling the joint behavior of the recognizers irrespective of the class. For the classifier set E, let L be a partition of E into groups of classifiers that agree on the top choice; RKS counts the training patterns whose output configurations belong to L, and, among those, the ones for which the agreed top choice is correct and the ones for which it is not.
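
  A minimal sketch of a BKS lookup table, following the idea in [Huang et al., 1995]: index training patterns by the joint top choices of the K classifiers and, at test time, pick the most frequent true class in the matching unit. The empty-unit branch makes the sparseness problem concrete; names and the reject handling are illustrative.

```python
# Behavior Knowledge Space as a decision lookup table (sketch).
from collections import Counter, defaultdict

def build_bks(train_outputs, train_labels):
    """train_outputs: list of K-tuples of classifier top choices."""
    bks = defaultdict(Counter)
    for config, label in zip(train_outputs, train_labels):
        bks[tuple(config)][label] += 1
    return bks

def bks_decide(bks, config, reject=None):
    """Most frequent true class for this configuration, or reject if unseen."""
    unit = bks.get(tuple(config))
    if not unit:                      # empty unit: the sparseness problem
        return reject
    return unit.most_common(1)[0][0]
```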

  33. Classifier Combination – Unified Framework
  K component classifiers C1, C2, …, CK produce outputs over a common frame of discernment, each characterized by "global" performance (over the whole training set) and "local" performance (in the neighborhood of the current input). The global and local information of each component classifier, together with the ensemble information, are fused by the Dempster-Shafer combination algorithm into the final decision.

  34. Classifier Combination – Methods Used
  s1/s2/s3 – first/second/third classifier.
  X – original method in [Xu et al., 1998]; P – original method in [Parikh et al., 2001]; R1 – original method in [Rogova et al., 1997], cosine measure; R2 – original method in [Rogova et al., 1997], Euclidean-distance-based measure; FIM – Fuzzy Integral Method [Tahani et al., 1990]; BKS – Behavior Knowledge Space [Huang et al., 1995].
  M1 – "global": recognition/substitution rates for each class; M2 – "local": sum of masses over all classes adds up to 1; M3 – "local": membership functions instead of distances to mean vectors; RKS – Reduced Knowledge Space.
  Combinations: X+M2; M2+DH – M2 + double hypotheses; X+M2+DH; M3+DH; M2+BKS – M2 + ensemble BPA obtained from BKS; M2+RKS; X+M2+BKS; X+M2+RKS.

  35. Results
  [results: original recognizer performance; combination using local, global, and local + global information]

  36. Results
  [results: combination using ensemble information; double hypotheses]

  37. Results
  [results: combination using local + global + ensemble information]

  38. Computational Theory of Writer Recognition
  Framework roadmap (as above). Identification of Document Elements: Complete Information (Transcript Mapping), Partial Information (Foreign Mail). Extraction of Document Elements: Classifier Combination, Classifier Selection.

  39. Classifier Selection – When to Use Which Classifier?
  Dynamic classifier selection: given an input and a class set, choose the "best" classifier Cj among classifiers C1, C2, …, Cn (with algorithms A1, A2, …, AK). Prior work selects by input [Woods et al., 1997], [Xue et al., 2002] or by class set [Kanai et al., 1997].
  Goal: choose the classifier based on the class-set (alphabet) size and composition.

  40. Classifier Selection
  Word lexicons (e.g., {Iran, Iraq, Zair} vs. {Iran, Oran}) and character alphabets (e.g., {a, o} vs. {c, t} vs. {v, w} vs. {O, P} vs. {D, t}) differ in size and "confusability"; edit distance alone does not capture this.
  • How to measure the confusability of an alphabet? Recognizer confusion matrices [Kanai et al., 1994], deformation energy of elastic nets [Revow et al., 1996], image-to-image distances [Huttenlocher et al., 2002], degradation model [Baird et al., 1993] – but there is no "perfect" handwriting sample.
  • Drawbacks: classifier dependency and vulnerability to outliers.
  • Approach: eliminate outliers and variances of shape by looking at the character skeletons (see the aggregation sketch below).
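
  A minimal sketch of turning pairwise character confusabilities into an alphabet-level score, here simply the mean over all unordered pairs; the pairwise measure is left abstract (the dissertation derives it from HMM co-emission, slide 43), and the toy `pair` function is made up.

```python
# Alphabet confusability as mean pairwise character confusability (sketch).
from itertools import combinations

def alphabet_confusability(alphabet, pair_conf):
    """Mean confusability over all unordered character pairs in the alphabet."""
    pairs = list(combinations(alphabet, 2))
    return sum(pair_conf(a, b) for a, b in pairs) / len(pairs)

# Toy pairwise measure: 'a'/'o' and 'v'/'w' are highly confusable.
pair = lambda a, b: 1.0 if {a, b} in ({"a", "o"}, {"v", "w"}) else 0.1
print(alphabet_confusability("aovw", pair))
```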

  41. Classifier Selection
  • Structural features [Xue et al., 2002]: Loop, Cusp, Arc, Circle, Cross, Gap – with different attributes: Position, Orientation, Angle (e.g., upward arc, upward cusp, loop, downward arc).
  • Extract structural features from each character and build a profile HMM for each character (models of different sizes), with Begin, Match, Insert, Delete, and End states.

  42. Classifier Selection
  [figure: HMM model for digit '2'; emission probabilities for state 3]

  43. Classifier Selection
  Character confusability (similarity): the distance between the characters' corresponding HMM models M1 and M2, measured as the probability that the two models generate the same sequence (the co-emission probability). Pairs of characters are ranked by confusability, and the pairwise values are aggregated into an alphabet confusability (e.g., for an alphabet A = {a, c, t, v, w, …}).
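
  A minimal sketch of estimating co-emission by Monte Carlo, using the identity C(M1, M2) = Σ_s P(s|M1)·P(s|M2) = E over s ~ M1 of P(s|M2). Fixed-length sequences and the dict-of-arrays model format are simplifying assumptions; the dissertation works with profile HMMs, for which an exact computation exists.

```python
# Monte Carlo co-emission probability between two discrete HMMs (sketch).
import numpy as np

rng = np.random.default_rng(0)

def sample(hmm, length):
    """Draw an observation sequence of the given length from an HMM."""
    state = rng.choice(len(hmm["pi"]), p=hmm["pi"])
    seq = []
    for _ in range(length):
        seq.append(rng.choice(hmm["B"].shape[1], p=hmm["B"][state]))
        state = rng.choice(len(hmm["pi"]), p=hmm["A"][state])
    return seq

def likelihood(hmm, seq):
    """Forward algorithm: P(seq | hmm)."""
    alpha = hmm["pi"] * hmm["B"][:, seq[0]]
    for o in seq[1:]:
        alpha = (alpha @ hmm["A"]) * hmm["B"][:, o]
    return alpha.sum()

def co_emission(m1, m2, length=5, n_samples=2000):
    """Estimate sum_s P(s|m1)P(s|m2) over sequences of a fixed length."""
    return np.mean([likelihood(m2, sample(m1, length)) for _ in range(n_samples)])

# Tiny two-state model; a model compared to itself gives a high value.
m = {"pi": np.array([1.0, 0.0]),
     "A":  np.array([[0.8, 0.2], [0.0, 1.0]]),
     "B":  np.array([[0.9, 0.1], [0.1, 0.9]])}
print(co_emission(m, m))
```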

  44. Performance
  [charts: train/test performance vs. confusability interval, for alphabet sizes 5 and 7]
  Results obtained using the R1-based confusability measure.

  45. Performance
  [charts: train/test performance vs. confusability interval, for alphabet sizes 5 and 7]
  Results obtained using the HMM-based confusability measure.

  46. Computational Theory of Writer Recognition
  Framework roadmap (as above). Determination of Discriminating Elements: document elements, word and document features.

  47. Character Discriminability
  • Theory: individuality is exhibited by writers in the execution of more complex forms.
  • Characters/words differ in their discriminability power.
  For each character, distances are computed between samples of the same writer (w1–w1, w2–w2, …, wn–wn) and between samples of different writers (w1–w2, w2–w3, …, wi–wj), giving same-writer and different-writer distributions (μ, σ) in distance space.

  48. Character Discriminability
  Discriminability between the same-writer (w1–w1, …, wn–wn) and different-writer (w1–w2, …, wi–wj) distance distributions is measured by:
  • the Bhattacharyya distance
  • the area under the Receiver Operating Characteristic (ROC) curve
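
  A minimal sketch of the two measures on 1-D Gaussian approximations of the same-writer (SW) and different-writer (DW) distance distributions. The Bhattacharyya distance between univariate Gaussians has the closed form below; the ROC area is estimated empirically (Mann-Whitney style, ignoring ties). The Gaussian approximation is an assumption of this sketch.

```python
# Bhattacharyya distance and ROC area between SW and DW distributions (sketch).
import numpy as np

def bhattacharyya(mu1, s1, mu2, s2):
    """Bhattacharyya distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    return (0.25 * (mu1 - mu2) ** 2 / (s1**2 + s2**2)
            + 0.5 * np.log((s1**2 + s2**2) / (2 * s1 * s2)))

def roc_area(sw, dw):
    """P(random DW distance > random SW distance) = area under the ROC curve."""
    sw, dw = np.asarray(sw), np.asarray(dw)
    return (dw[None, :] > sw[:, None]).mean()
```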

  49. Character Discriminability – Writer Verification
  Discriminability ranking of characters by the Bhattacharyya distance / ROC area between their SW and DW distance distributions.

  50. Word Discriminability
  [chart: discriminability ranking of the first 25 words of the CEDAR letter, annotated with word length]
