





Reconstructing Scenes: View-boundaries vs. Object-boundaries
Colleen DiCola & Helene Intraub
Department of Psychology, University of Delaware

INTRODUCTION
• There is a tendency to remember having seen beyond the boundaries of a view (boundary extension [BE]; Intraub & Richardson, 1989).
• This may facilitate integration of successive views into a coherent representation of a continuous world that is never perceived all at once.
• Gottesman & Intraub (2003) showed that BE does not occur at all surrounding boundaries. When the boundaries constitute the edges of the view, BE occurs; however, surrounding boundaries that form the edges of an object within the view do not elicit it. (Figure 1. Object boundaries vs. view boundaries: e.g., viewers in their study remembered seeing a greater expanse of grass around the blue towel, but not a greater expanse of towel around the shoe.)
• Are view-boundaries special – eliciting extrapolation? There are other boundaries within a view that have similar characteristics but do not constitute the edges of the view. The same visible part of the "watering can" is shown in Figure 2, occluded by (A) the edge of an object within the view vs. (B) the edge of the view.

QUESTION
• Although the visible area of the occluded object is identical in both conditions, will viewers remember having seen more of the object when it is cropped by the view-boundary than when it is cropped by an object within the view?
• At a view-boundary, such an error would be predictive (a good "guess" about what one might see in the "next" view). In the other case, however, it would cause a consistent distortion in the spatial relationship of two perceived objects – an error that would seem somewhat maladaptive.
• Hypothesis 1: All occlusion is represented similarly in visual memory. Memory for part of the unseen portion of an occluded object will be elicited equally at all occluding boundaries.
• Hypothesis 2: View-boundaries and object-boundaries are represented differently. Extrapolation is elicited by a view-boundary: a greater amount of the unseen portion will be remembered when the object is cropped by the edge of a view.

EXPERIMENT 1
Stimuli & Apparatus
• Two versions of 12 scenes were digitally photographed.
• The same visible fragment of an occluded object was always adjacent to either a view-boundary or an object-boundary (see the "watering can fragment," Figure 2).
• Black view-boundaries and the critical objects (the occluded target object and the occluding object) were digitally "cut out" and placed on transparent layers of a multi-layered graphics display, so that each could be moved by the subject using the mouse. (Objects could be moved behind other objects or behind view-boundaries.)
Presentation
• Each subject (N = 72) viewed 12 trials (½ view-boundary, ½ object-boundary). A trial is shown in Figure 3 (trial progression depicting an object-boundary scene in the large-aperture testing condition).
View-Boundary Test
• When the scene reappeared after the mask, the target objects were missing and the view-boundaries were either pulled out (as in Figure 3) or pushed in (as in Figure 4, right panel), allowing both large- and small-aperture test conditions to avoid bias (Figure 4: stimulus, large-aperture, small-aperture).
• The test included two parts:
1) Boundary Adjustment: Subjects used the mouse to slide each of the four view-boundaries to its remembered location.
2) Object Placement: The missing objects then appeared at the top of the scene, and subjects used the mouse to place them in their remembered locations. Objects could be moved behind another object or behind a view-boundary if the subject chose.
• The visible portion of each object was measured in pixels; the distance of the inner edge of each border was measured with respect to its original placement in the stimulus display.

RESULTS (EXPERIMENT 1)
Memory for Occluded Objects
• The % of original visible pixels remembered in each condition is shown in Figure 6 (error bars = .95 confidence interval; 100% indicates accurate memory).
• Viewers remembered seeing significantly more of the object (i.e., more than 100% of the original visible pixels) when it had been occluded by the edge of the picture (blue bars); this did not occur when it had been occluded by another object within the scene (red bars).
• Remembering more of the object in the edge condition cannot be attributed solely to BE at that border: it occurred even in the absence of BE at the critical border (20% of the trials), and objects often were shifted away from the view-boundary (44% of the trials).
Boundary Adjustment
• Overall, BE occurred in both the large- and small-aperture conditions, but was greater in the large-aperture condition (t(70) = 6.40, p < .001).
• Boundary placement for each condition is shown in Figure 5. In all conditions, significant extension was obtained at the sides of the view.
[Figure 5. Mean % BE at each boundary in the large- and small-aperture conditions; the % change in the area of the view is shown in red.]

EXPERIMENT 2
Question
• Was the outcome of Experiment 1 due to differences in the type of occlusion (view-boundaries vs. object-boundaries), or to the fact that the region beyond the view-boundary was always a featureless black field?
• Perhaps object extrapolation is encouraged by a featureless black field (inducing imagination), whereas accuracy is helped when the occluding region has details that can serve as landmarks (as in the object condition).
Method
• To disentangle occlusion type from the presence/absence of detail on the occluding region, the view-boundaries were "stamped" with multiple copies of the occluding object from the object-boundary condition (e.g., the podium in Figure 7).
• Note that the local information is identical in both conditions; the only difference is whether the podium is within the view or outside the view (forming part of the view-boundary).

RESULTS (EXPERIMENT 2)
Memory for the Occluded Object
• Results replicated Experiment 1: viewers remembered seeing more of the occluded object in the edge condition (blue bars) than in the object condition (red bars) in both aperture conditions (Figure 9; error bars = .95 confidence interval; 100% indicates accurate memory).
• Viewers in the small-aperture condition remembered seeing a little more of the object than was originally shown when it was cropped by another object; however, they remembered seeing even more of the object in the view-boundary condition (t(35) = 2.24, p < .05).
Boundary Adjustment
• As in Experiment 1, object extrapolation at a view-boundary cannot be attributed solely to boundary extension of the scene at that border, because it occurred on trials in which the boundary was not extended, and objects frequently were shifted away from the view-boundary.
• Boundary extension results followed a pattern similar to that in Experiment 1.

CONCLUSIONS
• This is the first experiment we know of in which viewers were asked to reconstruct both the borders of a view and object occlusion within the view. Given the open-ended nature of the task, the spatial errors that occurred were remarkably consistent, including both BE and object extrapolation.
• Memory for occluded objects within scenes was clearly affected by the type of occlusion. Although amodal completion of the occluded object would be expected in both conditions, only occlusion by a view-boundary consistently resulted in memory for an unseen region of the occluded object. This occurred even when the local information at the border was identical in the two conditions (Experiment 2).
• The potential adaptive value of this distinction is that it helps constrain anticipatory projection of information to contexts in which it is most likely to be useful (e.g., in facilitating the integration of one view with the next), and limits it in contexts that would result in a distortion of object relations within the view.
• This distinction holds seconds following stimulus offset (a time period that is useful during exploration of an environment); whether it holds for long-term retention is yet to be determined.

REFERENCES
Gottesman, C. V., & Intraub, H. (2003). Constraints on spatial extrapolation in the mental representation of scenes: View-boundaries vs. object boundaries. Visual Cognition, 10, 875-893.
Intraub, H., & Richardson, M. (1989). Wide-angle memories of close-up scenes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 179-187.

ACKNOWLEDGMENTS
The authors thank Scott Kay for multi-layered graphics programming. Research was supported by NIMH Grant MH54688.
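The poster's two dependent measures — percent of the originally visible pixels remembered for the occluded object (100% = accurate, >100% = extrapolation), and the change in view area after boundary adjustment (positive = boundary extension) — can be sketched as follows. This is a hedged illustration only: the function names and the example numbers are hypothetical, not the authors' actual scoring code or data.

```python
# Hypothetical sketch of the two pixel-based measures described above.
# Not the authors' code; names and example values are invented.

def percent_visible_remembered(original_visible_px: int, remembered_visible_px: int) -> float:
    """Object-placement score: remembered visible pixels of the occluded
    object as a % of the originally visible pixels. 100.0 means accurate
    memory; values above 100.0 indicate extrapolation past the occluder."""
    return 100.0 * remembered_visible_px / original_visible_px

def percent_area_change(orig_w: int, orig_h: int, new_w: int, new_h: int) -> float:
    """Boundary-adjustment score: % change in the area of the view after
    the subject slides the four view-boundaries. Positive values indicate
    boundary extension (BE); negative values indicate restriction."""
    return 100.0 * (new_w * new_h - orig_w * orig_h) / (orig_w * orig_h)

# Example with made-up numbers: a subject places an object so that 5,750
# pixels are visible where 5,000 were originally shown, and enlarges a
# 640x480 view to 704x528.
print(percent_visible_remembered(5000, 5750))   # 115.0 -> extrapolation
print(percent_area_change(640, 480, 704, 528))  # 21.0 -> boundary extension
```

Scores above 100% in the view-boundary condition but not the object-boundary condition would correspond to the pattern of blue vs. red bars reported in Figures 6 and 9.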
