
Face Identification Systems




  1. Face Identification Systems

  2. Face recall systems provide a visualisation of the witness' memory for a face. They include Identikit, Photofit, sketching, and computer systems (e.g. Identikit 2000, FACES 3.0, E-Fit, Evo-Fit). Problem: they are usually poor at producing recognisable likenesses. "The man was a pretty odd-looking character and we didn't get a good look at his face, but he didn't look that odd," Mrs Rule said. "The man in the picture has half an ear - he didn't have half an ear. And his moustache wasn't like that. I don't think I've ever seen anyone who looks like that in Stalham or anywhere else in my life." She added: "Apparently the problem with the moustache was that the police only had long moustaches on their computer so they had to sort of chop it off at the ends." A spokesman for Norfolk Police said: "The E-fit image is compiled as a result of the witness to the crime giving as accurate and detailed description as possible and how much they are able to recall."

  3. The world's top ten worst photofits (Mirror.co.uk, 26/11/09):

  4. Ellis, Shepherd and Davies (1975): [Images: target faces shown alongside Photofits constructed immediately after viewing the targets; independent raters judged the left-hand group of Photofits to be better likenesses of the targets than the right-hand group.]

  5. Ellis, Shepherd and Davies (1975): First experiment: Subjects saw two Photofit faces. Reconstructed them using Photofit - very difficult, even when the target Photofit face was present. Second experiment: Subjects saw 6 faces: produced Photofits of them from memory, immediately afterwards. A second group tried to use these Photofits to identify the 6 original faces from a set of 36 faces. Chance matching performance = 3% (1 in 36). Actual success rate was 12.5% - better than chance, but very poor.
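
The chance figure on the slide is simply 1 in 36; the minimal Python check below (the observed rate is taken directly from the slide, everything else is arithmetic) makes the comparison explicit.

```python
# Chance vs. observed matching rate in Ellis, Shepherd & Davies (1975, Exp. 2).
# Illustrative arithmetic only; the 12.5% figure is reported on the slide.
n_alternatives = 36          # each Photofit was matched against 36 candidate faces
chance_rate = 1 / n_alternatives
observed_rate = 0.125        # success rate reported on the slide

print(f"Chance:   {chance_rate:.1%}")   # ~2.8%, rounded to 3% on the slide
print(f"Observed: {observed_rate:.1%}") # 12.5%
print(f"Observed is ~{observed_rate / chance_rate:.1f}x chance, but still very poor.")
```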

  6. Ellis, Shepherd and Davies (1978): First experiment: Participants saw a video of a man reading (either 15 seconds or 2.5 minutes long). They attended either to the man's face (expecting to make a Photofit of it) or to the passage (expecting a comprehension test). There was no benefit from attending to the face. All Photofits were independently rated as being poor likenesses.

  7. Ellis, Shepherd and Davies (1978): Second experiment: Compared Photofit to freehand sketching. Sketches were better than Photofits if the target face was physically present. Photofit was better when the target was reconstructed from memory. Sketches were "child-like in their simplicity and bore only a vague approximation to the original target faces" - Photofits rated little better! Photofit was still a poor likeness even when the face was in view while it was constructed: i.e., an accurate COPY of a face is extremely difficult to produce using Photofit.

  8. Laughery and Fowler (1980): Compared sketching with Identikits. Sketches were rated better overall likenesses than Identikits. Sketches showed a difference in rated likeness between "from memory" and "target face present" conditions. No difference with Identikit - because the likenesses were so poor.

  9. Verbal descriptions (Christie and Ellis 1981): Better than Photofits. E-Fit (Davies, van der Willik and Morrison 2000): No better than Photofit when the composite is constructed from memory - it can only produce good likenesses when the face is in view at the time of construction.

  10. E-Fits made by trained police operatives with the target face present throughout construction:

  11. Frowd et al (2005, 2007): forensically-relevant evaluations: 2005: compared Photofit, E-Fit, Pro-Fit, Evo-Fit. Witness makes composite of a face that is famous but unknown to them. Police procedures followed, plus realistic delay between seeing face and making composite. Different participant tries to identify the famous face (known to them). Various measures - spontaneous naming most relevant. E-Fit and Pro-Fit best, but only 20% of composites were recognised. 2007: compared E-Fit, Identikit 2000, FACES 3.0. Only two out of 480 composites could be spontaneously identified.

  12. Why is FRS performance so poor? 1. Limitations in eyewitnesses themselves. 2. Equipment limitations (restricted feature-sets). 3. Interference effects from Photofit features. 4. Problems in method of construction. (verbal mediation is difficult due to impoverishment of descriptors, and may be prone to “verbal overshadowing”). 5. Inappropriate "feature-based" theoretical basis (Penry 1971), at odds with the “configural-based” processing actually used for face recognition.
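
To make point 5 concrete, here is a purely hypothetical sketch (not any real system's data model; all class and field names are invented for illustration) contrasting the feature-based representation assumed by Photofit-style kits with a configural description of the same face.

```python
# Hypothetical sketch contrasting a Penry-style feature-based composite with a
# configural description closer to how faces seem to be recognised.
from dataclasses import dataclass

@dataclass
class FeatureBasedComposite:
    # Feature-based: a face is a catalogue of interchangeable parts.
    hair_id: int
    eyes_id: int
    nose_id: int
    mouth_id: int
    chin_id: int

@dataclass
class ConfiguralDescription:
    # Configural: the spatial relations between features matter as much as the
    # features themselves (distances in arbitrary units).
    interocular_distance: float
    eye_to_mouth_distance: float
    nose_width_to_face_width: float

# A feature-based kit can only swap parts; it has no vocabulary for
# "the eyes were set slightly too far apart", which a configural
# description captures directly.
composite = FeatureBasedComposite(hair_id=12, eyes_id=7, nose_id=3, mouth_id=9, chin_id=4)
config = ConfiguralDescription(interocular_distance=6.3,
                               eye_to_mouth_distance=7.1,
                               nose_width_to_face_width=0.24)
print(composite)
print(config)
```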

  13. Cheque cards with photographs (Kemp, Towell and Pike 1999): Cashiers failed to detect mismatches between the user and the photo on the card. They falsely accepted over 50% of fraudulent cards, and falsely rejected over 10% of legitimate ones. [Cartoon caption: "Mr. Cruise is here to see you…"]
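
The two error rates map onto a standard acceptance/rejection table. The counts below are hypothetical, chosen only so the percentages line up with those reported on the slide.

```python
# Hypothetical counts illustrating how the two error rates in
# Kemp, Towell & Pike (1999) are defined; only the percentages come from the slide.
fraudulent_presented = 100   # card photo does NOT match the bearer
legitimate_presented = 100   # card photo DOES match the bearer

false_acceptances = 50       # fraudulent cards the cashier accepted (>50% reported)
false_rejections = 10        # legitimate cards the cashier rejected (>10% reported)

far = false_acceptances / fraudulent_presented   # false acceptance rate
frr = false_rejections / legitimate_presented    # false rejection rate
print(f"False acceptance rate: {far:.0%}")
print(f"False rejection rate:  {frr:.0%}")
```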

  14. Recognition in surveillance videos: (Burton, Wilson, Cowan and Bruce 1999). Subjects judged whether faces seen in high-quality photos had been seen before in video clips. Subjects personally familiar with the targets performed well; subjects unfamiliar with them performed poorly.

  15. Henderson, Bruce and Burton (2001): Experiment 1: CCTV images. Experiment 2: photographs.

  16. Interference effects from viewing other faces/features: Davies and Christie (1982): Participants who judged similarity of Photofit features to a feature on a target face were no poorer than controls at subsequently recognising the target face. Wogalter (1991): Participants described a face by either (a) spontaneously supplying applicable adjectives, or (b) ticking applicable adjectives in a list of alternatives. Group (b) were poorer at recognising the face than (a). Perhaps distracting adjectives contaminated face memory.

  17. Problems with the method of construction: Verbal description and Photofit construction are sequential, feature-by-feature processes, unlike the way faces themselves appear to be represented. Hall (1976): Making verbal descriptions of a face to a sketch artist impaired subsequent recognition of the described face. Christie and Ellis (1981): No correlation between the rated quality of a subject's Photofit constructions and the quality of their verbal descriptions.

  18. Verbal overshadowing: Verbally describing a face may impair subsequent recognition of that face - and others (Schooler and Engstler-Schooler 1990; Meissner and Brigham 2001). Elaborative description produces more overshadowing (Meissner and Brigham 2001). Effects cross semantic categories: e.g. describing a face impairs car recognition (but not vice versa) (Brown and Lloyd-Jones 2003). Perhaps verbal description encourages inappropriate processing strategies (e.g. local rather than holistic), which hinder retrieval of face-appropriate information.

  19. Verbal overshadowing and processing orientation: Macrae and Lewis (2002): Participants tried to recognise a "robber" from an 8-alternative photo lineup. Interpolated task - either: (a) identify large letters comprised of small letters ("global" orientation); (b) identify the small letters, ignoring large letters ("local" orientation). (c) control task, reading a passage. (a) > (c) > (b). Overshadowing is produced by an inappropriate processing orientation - not "verbal" as such.
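
For readers unfamiliar with the interpolated task, the stimuli are Navon-style figures: a large letter built from small copies of a different letter. Below is a minimal text mock-up (the 5x5 letter patterns are ad hoc, not the actual stimuli used in the study).

```python
# Minimal sketch of a Navon-style stimulus: a large ("global") letter composed
# of small ("local") copies of a different letter.
PATTERNS = {
    "H": ["X...X", "X...X", "XXXXX", "X...X", "X...X"],
    "E": ["XXXXX", "X....", "XXXX.", "X....", "XXXXX"],
}

def navon(global_letter: str, local_letter: str) -> str:
    """Render `global_letter` as a grid of `local_letter` characters."""
    rows = PATTERNS[global_letter]
    return "\n".join(
        "".join(local_letter if cell == "X" else " " for cell in row)
        for row in rows
    )

# Global reading: "H"; local reading: "E".
print(navon("H", "E"))
```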

  20. Evidence that face recognition involves more than getting the right features in roughly the right places: faces with the right features but the wrong positions (eyes, mouth) and wrong size (nose); inversion reduces sensitivity to configural information (the Thatcher illusion); recognition survives loss of featural information (due to blurring or pixellation).
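
As an illustration of the pixellation point, the sketch below block-averages an image so that fine featural detail is lost while the coarse configuration survives. It uses a random array as a stand-in image and is not the manipulation from any particular study.

```python
# Block-averaging ("pixellating") an image: fine detail is removed, the
# coarse layout is preserved. A random array stands in for a face image.
import numpy as np

def pixellate(img: np.ndarray, block: int) -> np.ndarray:
    """Replace each block x block patch with its mean intensity."""
    h, w = img.shape
    h2, w2 = h - h % block, w - w % block            # crop to a multiple of block
    patches = img[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    means = patches.mean(axis=(1, 3))
    return np.kron(means, np.ones((block, block)))   # blow back up to full size

face = np.random.rand(128, 128)       # stand-in for a greyscale face image
coarse = pixellate(face, block=16)    # featural detail gone, configuration kept
print(face.shape, coarse.shape)
```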

  21. The chimeric face effect (Young, Hellawell and Hay 1987): Upright faces are processed in an integrated "holistic" way that prevents easy access to their constituent features. Inversion abolishes this effect.

  22. "Whole over part" advantage: Features are recognised better if they are presented within a whole face than if presented in isolation or within a scrambled face (Tanaka and Farah 1993).

  23. The Bruce and Young (1986) model of face processing: [Diagram: structural encoding feeding routes for expression analysis, facial speech, and visually derived information such as age and gender, plus a recognition route running from Face Recognition Units to Person Identity Nodes to Name Generation.]

  24. Stages in face recognition: Structural encoding - based on features and their configuration (spatial relationships). Face Recognition Unit - activated by a match to a stored face representation. Person Identity Node - contains semantic information about the person. Name Generation.
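
Purely as an illustration of the ordering of these stages (the Bruce and Young model is a functional account, not an algorithm, and all stored values and names below are invented), the pipeline can be sketched like this:

```python
# Toy sketch of the recognition route: structural encoding -> Face Recognition
# Unit -> Person Identity Node -> name generation. All data are hypothetical.
from dataclasses import dataclass

@dataclass
class StructuralCode:
    features: dict          # descriptions of individual features
    configuration: dict     # spatial relations between features

# Face Recognition Units: one per known face, fire on a structural match.
KNOWN_FACES = {"fru_17": StructuralCode({"nose": "long"}, {"interocular": 6.3})}

# Person Identity Nodes: semantic information about the person.
IDENTITY = {"fru_17": {"occupation": "lecturer", "known_from": "campus"}}

# Name generation is a separate, final stage (hence "I know who that is,
# but I can't retrieve the name" errors).
NAMES = {"fru_17": "Dr. Example"}   # hypothetical name

def recognise(seen: StructuralCode):
    for fru, stored in KNOWN_FACES.items():
        if stored == seen:                      # structural encoding -> FRU
            semantics = IDENTITY[fru]           # FRU -> Person Identity Node
            name = NAMES.get(fru)               # PIN -> name generation
            return semantics, name
    return None, None                           # unfamiliar face

print(recognise(StructuralCode({"nose": "long"}, {"interocular": 6.3})))
```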

  25. Familiar vs unfamiliar face recognition: Both familiar and unfamiliar face recognition involve similar types of processing (Collishaw and Hole 2000). But - familiar faces are recognised better from internal features, unfamiliar faces from external features (Ellis, Shepherd and Davies 1979; Young, Hay, McWeeny, Flude and Ellis 1985). Familiar face recognition is based on an "abstractive" representation compiled from many views; unfamiliar face recognition is more image-based/episodic (Burton et al 2005; Megreya & Burton 2006, 2008).
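
The "abstractive vs image-based" contrast can be illustrated with a toy simulation: averaging noisy codes from many encounters typically gives a more stable representation than any single image. The vectors below are random stand-ins, not real face descriptors.

```python
# Toy illustration of an "abstractive" representation built by averaging
# many views, versus an episodic, single-image representation.
import numpy as np

rng = np.random.default_rng(0)
true_identity = rng.normal(size=64)                    # hypothetical "true" face code
views = [true_identity + rng.normal(scale=1.0, size=64) for _ in range(20)]

familiar_rep = np.mean(views, axis=0)   # abstracted across many encounters
unfamiliar_rep = views[0]               # episodic: one image only

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

probe = true_identity + rng.normal(scale=1.0, size=64)  # a novel view of the same person
print("familiar (averaged) match:", round(similarity(familiar_rep, probe), 3))
print("unfamiliar (single image):", round(similarity(unfamiliar_rep, probe), 3))
```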

  26. Conclusions: 1. Face processing involves configural processing; face recall systems need to be sympathetic to this. 2. Future systems could take into account the biological constraints on what can occur in a face - e.g. dolichocephalic versus brachycephalic head shapes (Enlow 1982).
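
The dolichocephalic/brachycephalic distinction in point 2 comes down to the cephalic index (head breadth divided by head length, times 100). The sketch below uses the commonly cited approximate cut-offs; the example measurements are made up.

```python
# Cephalic index and the conventional (approximate) head-shape categories.
def cephalic_index(head_breadth_mm: float, head_length_mm: float) -> float:
    return 100.0 * head_breadth_mm / head_length_mm

def head_shape(ci: float) -> str:
    if ci < 75:
        return "dolichocephalic (long, narrow head)"
    if ci < 80:
        return "mesocephalic (intermediate)"
    return "brachycephalic (short, broad head)"

ci = cephalic_index(head_breadth_mm=147, head_length_mm=196)  # example measurements
print(round(ci, 1), head_shape(ci))   # 75.0 -> just over the dolichocephalic boundary
```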

  27. 3. Using multiple composites might aid recognition: (a) Brace et al (2006): Presenting witnesses with any four composites (from 8) aided recognition.

  28. (b) Bruce et al (2002): Morphs of four witnesses' composites, and sets of four composites, were recognised better than individual composites.
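
As a very rough illustration of why combining composites helps, the sketch below treats a "morph" as a plain pixel average of aligned greyscale images (a real morph warps to facial landmarks first); individual witnesses' errors tend to cancel in the average. The data are random stand-ins, not composites from the study.

```python
# Naive "morph" as a pixel average of several roughly aligned composites.
import numpy as np

def average_composite(composites: list[np.ndarray]) -> np.ndarray:
    """Mean of several same-sized, roughly aligned greyscale composites."""
    return np.stack(composites, axis=0).mean(axis=0)

# Stand-in data: four noisy renderings of the same underlying face.
rng = np.random.default_rng(1)
target = rng.random((64, 64))
composites = [np.clip(target + rng.normal(scale=0.2, size=(64, 64)), 0, 1)
              for _ in range(4)]

morph = average_composite(composites)
err_single = np.abs(composites[0] - target).mean()
err_morph = np.abs(morph - target).mean()
print(f"single composite error: {err_single:.3f}, 4-composite morph error: {err_morph:.3f}")
```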
