
Object & Face Perception


Presentation Transcript


  1. Object & Face Perception • Outline • The problem (for the visual system) • Theories of object recognition • Evidence (mostly effects of rotation) • Reconciliation (maybe all “views” are right)? • Beyond viewpoint effects • Faces - are they special? • Evidence for specialness • configuration effects • Neurophysiological/neuropsychological evidence • But is this really special (Gauthier)? • Probing configurational information further

  2. Perceiving objects • People have argued that object recognition is the hardest thing the visual system does. • It is tricky because the retinal information available to discriminate one object from another changes dramatically whenever you or the object changes position/orientation. • Object perception research has (mostly) restricted itself to situations where non-shape information is not useful, and has concentrated on viewpoint constancy.

  3. How do we achieve viewpoint constancy? Two categories of theory: • Viewpoint invariant theories • Recognition By Components (in which information about the 3D structure of the object is extracted from a single view of it). • Viewpoint dependent theories • In which specific views are “stored” and performance is (somehow) based on generalisation from these.

  4. RBC or “geon” theory • The most obvious solution to the problem of how we manage to generalise across viewpoints is to suggest that viewpoint invariant information can be extracted from a single view. • In RBC, Biederman proposes that under most circumstances we are able to extract a “structural description” of the object • The way its parts (its geons) relate to one another.

  5. Objects made of geons • Since the structural description is “object centred”, if you can extract one you should show no viewpoint costs when the object is rotated in depth (unless parts appear or disappear). • Alas, this is almost never found (even with single geons!)

  6. Peissig et al. (2002) • Here are data from pigeons showing decidedly incomplete generalisation. • Which is affected by the training views they get. • And this is a paper Irv’s on!

  7. Other problems • Biederman recognises that faces are a problem for his theory; he claims they are a special case, but…

  8. Evidence for geon theory - Biederman & Bar (1999): metric vs. non-accidental properties

  9. Biederman & Bar (1999) • Found much bigger viewpoint costs for objects which differed in metric properties (MPs) than for those that differed in a non-accidental property (NAP) • Even though performance was identical when the 1st and 2nd stimulus were at the same viewpoint (suggesting that the two kinds of change were equally salient)

  10. However… • Essentially every other experiment conducted by anyone else has shown substantial viewpoint costs, even when the objects are made of geons and have different structural descriptions • This evidence suggests that object recognition may be based on generalising from the particular views that you have actually seen previously • This conclusion is supported by the physiological research which has been done (but…)

  11. Logothetis, Pauls & Poggio (1994, 1995) • Showed (using paper clip objects) that monkeys developed IT cells which responded (with fairly narrow tuning) to particular training views • Behavioural generalisation (after training with one view) correlated well with neural generalisation • Cells were position and size invariant, despite their viewpoint specificity • Are these results due to using these stimuli?

  12. And… • Showing that an obvious feature permits viewpoint generalisation does not constitute evidence for RBC. • If the objects were different colours, say, then you would get perfect viewpoint generalisation • This suggests that viewpoint costs depend on the stimuli from which the object must be discriminated - the context.

  13. Williams & Hayward (2000) • Tested this possibility • But found the same viewpoint cost in each condition • Of course, this is likely to be influenced by the task as well

  14. Reconciliation? Vanrie et al. (2002) • Suggested there may be two routes to object recognition - one viewpoint invariant and one view specific • Compared performance on a mental rotation task and on objects differing by a NAP • [Figure: rotation vs. invariance activity]

  15. But…mental rotation? • Maybe mental rotation is specific to making handedness judgements • And so is not like normal object recognition • And so these results might just be a physiological confirmation of that possibility • Consider the difference between judging the identity of a rotated object and doing a genuine mental rotation task

  16. More reconciliation: Burgund and Marsolek (2000) • Examined priming of object recognition. • Found viewpoint dependent priming when the test object was presented (initially) to the right hemisphere, and viewpoint independent priming when it was presented to the left hemisphere

  17. What causes viewpoint effects? • Simons et al. (2002) showed that viewpoint costs are reduced if the subject actually does change viewpoint. • So information from sources other than the image matters

  18. Beyond viewpoint effects • Keane, Hayward & Burke (in press) investigated the kind of information that is most useful for discriminating between objects • Found that configuration changes are detected more quickly and more accurately than switch or identity changes • Identifying the info that is most useful for telling objects apart might help us understand how they are recognised

  19. Faces • Faces are objects that we are all expert at recognising • And there is evidence that they are processed in a different way to other objects • As though they are a special case • But you can probably already guess what might be different about recognising faces and telephones, say… • So guess…

  20. Face effects • Negation • Inversion

  21. Face effects • Negation • Inversion

  22. Boutet and Chaudhuri (2001) • You can clearly see two alternating (rivalling) faces when they are superimposed, but only if they are near upright (not if they are upside down)

  23. Inversion selectively affects configurational information

  24. Inversion selectively affects configurational information

  25. But what does configuration mean? • Maurer et al. (2002) distinguish several senses: • 1st order relationships • The fact that the parts are in the right categorical relationships • 2nd order relationships • Distances between the parts (metric or coordinate relationships) • Holistic processing • Faces are seen as “wholes” (making parts hard to differentiate)

  26. Neural evidence • Neurophysiology • There are cells in IT which respond selectively to faces. • And a part of IT (the FFA in humans) that responds exclusively to faces. • Neuropsychology • Prosopagnosia is the inability to recognise faces while otherwise being able to see normally. • Some agnosics can recognise faces but not other objects. • These dissociations are usually used as evidence for or against a special face area (as we shall see) - but these patients may be differentially insensitive to first order and second order relationships.

  27. But - Enter Isabel • Gauthier and colleagues have argued that faces seem special because they are a rare instance of a subordinate level classification at which we are expert • They have gathered evidence to support this: • Bird and dog experts show activity in the FFA when classifying birds and dogs (or faces!) • When people are trained with novel objects which require subordinate level classification (Greebles), activity in FFA increases as they become more expert.

  28. Meet the greebles • As well as showing the appropriate neural changes, greeble experts (and only experts) show traditional face effects: • Inversion • Recognise parts more easily if they are part of the right greeble • Have trouble recognising the top half if it is paired with the wrong bottom half

  29. But babies recognise faces! • It is well known that newborns preferentially look at faces - suggesting an innate face area. • But they are probably only sensitive to 1st order relationships (they have prosopagnosia!) • There is good evidence that sensitivity to 2nd order relationships develops quite slowly (it is still incomplete at 7 or 8 - Mondloch et al., 2002) • And young children in fact show inverse inversion effects (Brace et al., 2001) • So this is not really a problem for Isabel, but it suggests that we should be…

  30. Probing configuration: Cooper & Wojan (2000) • Asked subjects to match names to faces • Found one-eye-shifted matches faster than two-eye-shifted matches

  31. White 2002

  32. Keane, Burke & Hayward • Also examined Cooper and Wojan’s findings, but more completely • Based on results with novel objects

  33. Task

  34. Task

  35. Task • Respond: Same or Different

  36. Categorical Changes • 16 pixels moved • Half up, half down & half right, half left

  37. Coordinate Changes • 16 pixels moved • Half up, half down

  38. Identity Changes • Original feature manipulated
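
As a purely illustrative aside: below is a minimal, hypothetical Python (Pillow) sketch of how feature-displacement stimuli of the kind described in slides 36-38 might be generated. The schematic face, the eye-region coordinates, and the 8-pixel offsets are assumptions made for illustration only; this is not the stimulus code used in the study.

    from PIL import Image, ImageDraw  # assumes the Pillow library is available

    def schematic_face(size=200):
        """Draw a simple schematic face: head outline, two eyes, a nose and a mouth."""
        img = Image.new("L", (size, size), 255)
        d = ImageDraw.Draw(img)
        d.ellipse([10, 10, size - 10, size - 10], outline=0, width=2)   # head
        d.ellipse([60, 70, 80, 90], fill=0)                             # left eye
        d.ellipse([120, 70, 140, 90], fill=0)                           # right eye
        d.line([100, 95, 100, 125], fill=0, width=2)                    # nose
        d.arc([70, 120, 130, 160], start=20, end=160, fill=0, width=2)  # mouth
        return img

    def shift_region(img, box, dx, dy):
        """Cut out the rectangle `box` and re-paste it displaced by (dx, dy) pixels."""
        region = img.crop(box)
        out = img.copy()
        ImageDraw.Draw(out).rectangle(box, fill=255)   # blank the original location
        out.paste(region, (box[0] + dx, box[1] + dy))
        return out

    face = schematic_face()
    # A "coordinate"-style change (assumed): shift one eye up and one eye down by 8 px,
    # changing the metric spacing while keeping categorical relations (eyes above nose).
    coordinate = shift_region(shift_region(face, (60, 70, 80, 90), 0, -8),
                              (120, 70, 140, 90), 0, 8)
    # A "categorical"-style change (assumed) would move a feature far enough, vertically
    # and/or horizontally, to alter its categorical relations to the other features.
    face.save("original.png")
    coordinate.save("coordinate_change.png")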

  39. Results • UPRIGHT FACES: categorical = coordinate > identity • INVERTED FACES: categorical > coordinate > identity • Coordinate information is as important as categorical information for upright faces • Upside-down faces are processed in the same way as novel objects.
