
Presentation Transcript


Introduction
We present data from a study of top-down modulation of visual areas by sentence context. According to the Interactive Activation Model (McClelland and Rumelhart, 1981), word recognition results not only from bottom-up, data-driven processes, but also from top-down modulation due to context and semantics. Even early visual areas may be affected by this contextual modulation. The model was tested using an event-related fMRI paradigm in which sentence context was presented auditorily and words and nonwords were shown visually.

Task Design
EX: Auditory sentence context followed by a visual probe.
PARTICIPANT HEARS: “I like my coffee with milk and . . .”
PARTICIPANT SEES a stimulus in one of six conditions (three stimulus types crossed with upper vs. lower visual field):
• In-context word (“sugar”), upper or lower visual field
• Out-of-context word (“table”), upper or lower visual field
• Nonword (“jayer”), upper or lower visual field
Participants were required to fixate on a plus sign in the middle of a screen. Sentences missing the last word were presented auditorily. Following the sentence, a stimulus was flashed parafoveally: a word fitting the context of the sentence (in-context), an out-of-context word, or a nonword. Participants then performed a lexical decision task on the visual stimulus and responded by pressing two buttons bimanually.

Task Timeline
(Figure: attention tone at 0 sec; word probe; brain volumes acquired at TR = 2 sec; 18 sec of each trial shown.)

Methods
Task
• 5 sessions of 24 trials each (no stimuli repeated within subject)
• Type of trial counterbalanced back in time by condition (see the scheduling sketch after the Analysis section)
• Time between probe onsets: 16 sec
Participants
• 7 students, ages 20-26
• Right-handed, English first language, not bilingual
Imaging
• GE Signa 1.5 Tesla research scanner
• 1-shot gradient-echo spiral acquisitions, TR = 2000 ms
• 26 oblique axial slices, AC-PC aligned, 3.8 mm
Processing
• Image reconstruction, movement correction

Analysis
• We identified visual regions responding to any stimulus shown in the upper visual field, and to any stimulus shown in the lower visual field.
• Within these two activity maps for each subject, In Context and Out of Context trials were compared in a t-test contrast.
• In this way, the sensitivity of visual areas to modulation by sentence context was tested. Details below.
• 2 regressors were convolved with the hemodynamic response function: Visual Probe in Upper Field (UVF) and Visual Probe in Lower Field (LVF).
• Regressor maps were formed using the four brain volumes following the probe, and these F-maps (UVF and LVF for each subject) were thresholded separately to obtain visual-probe ROIs of at least 4 voxels. Voxel-wise p-values ranged from .05 to .0001.
• Time series from the visual-probe ROIs for each subject, from the UVF and LVF maps, were entered into a post-hoc t-test contrasting the In Context and Out of Context conditions (a sketch of this pipeline appears after the scheduling sketch below).
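The Methods above specify 5 sessions of 24 trials, six conditions (three stimulus types by two visual fields), and 16 sec between probe onsets. The sketch below generates one schedule satisfying those constraints; the shuffled-block counterbalancing and all names are illustrative assumptions, since the poster states only that trial types were counterbalanced back in time by condition.

```python
import random

TYPES = ["in_context", "out_of_context", "nonword"]
FIELDS = ["upper", "lower"]
CONDITIONS = [(t, f) for t in TYPES for f in FIELDS]  # the six conditions

def make_session(n_trials=24, soa=16.0, seed=None):
    """Return (onset_sec, stimulus_type, visual_field) tuples for one session."""
    rng = random.Random(seed)
    trials = []
    # 24 trials = 4 shuffled repetitions of the six conditions, so each
    # condition occurs equally often (an assumed counterbalancing scheme).
    for _ in range(n_trials // len(CONDITIONS)):
        block = CONDITIONS[:]
        rng.shuffle(block)
        trials.extend(block)
    # Probe onsets are 16 s apart, as reported in Methods.
    return [(i * soa, t, f) for i, (t, f) in enumerate(trials)]

# Five sessions with different orders (different seeds).
sessions = [make_session(seed=s) for s in range(5)]
print(sessions[0][:3])
```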
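A minimal sketch of the analysis pipeline described above: probe onsets for the UVF and LVF regressors are convolved with a hemodynamic response function, fit to a voxel time series, and per-subject ROI responses are then contrasted In vs. Out of Context with a paired t-test. The double-gamma HRF shape, the least-squares fit, and all onset and ROI values here are assumptions for illustration, not details from the poster.

```python
import numpy as np
from scipy.stats import gamma, ttest_rel

TR = 2.0  # sec, as reported

def canonical_hrf(tr=TR, duration=32.0):
    """Double-gamma HRF sampled at the TR (a standard, assumed shape)."""
    t = np.arange(0.0, duration, tr)
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

def probe_regressor(onsets_sec, n_scans, tr=TR):
    """Stick functions at probe onsets convolved with the HRF."""
    x = np.zeros(n_scans)
    x[(np.asarray(onsets_sec) / tr).astype(int)] = 1.0
    return np.convolve(x, canonical_hrf(tr))[:n_scans]

n_scans = 192                                      # hypothetical run length
uvf = probe_regressor([0, 32, 64, 96], n_scans)    # hypothetical UVF onsets
lvf = probe_regressor([16, 48, 80, 112], n_scans)  # hypothetical LVF onsets
X = np.column_stack([uvf, lvf, np.ones(n_scans)])  # design matrix + intercept

# Fit the GLM at one simulated voxel by ordinary least squares.
rng = np.random.default_rng(0)
y = 0.8 * uvf + rng.normal(0.0, 1.0, n_scans)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)

# Post-hoc contrast: mean ROI response per subject, In vs. Out of Context
# (seven hypothetical subject values, mirroring the reported trend).
in_ctx = np.array([1.2, 0.9, 1.1, 0.7, 1.0, 0.8, 1.3])
out_ctx = np.array([1.5, 1.1, 1.4, 0.9, 1.2, 1.1, 1.6])
t_val, p_val = ttest_rel(in_ctx, out_ctx)
print(betas[:2], t_val, p_val)
```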
Imaging Results
• An example of visual activity in an upper-visual-field F-map is shown below. (Figure: UVF F-map.)
• Across all subjects, words shown in context activated visual areas less than words shown out of context (post-hoc t-test trend, p = .23). This suggests that an auditory sentence context can modulate visual activity driven by word stimulation. (Figure: the resulting time series.)
• Although this analysis was not significant across subjects, further analysis suggests that some subjects do have visual areas that are significantly affected by word context.
• Below is a figure of visual mapping on one subject’s inflated left hemisphere, with the calcarine fissure facing front right. The blue line outlines an ROI sensitive to context (GLM: In vs. Out). The visual mapping displayed underneath the outlines suggests that the context modulation occurs within early visual areas (most likely V1). (Figure: context modulation over the visual-mapping gradient, 0° to 180°, V1.)

Conclusions
• Reading of a word is modulated by the context of the sentence in which that word appears. This effect is shown behaviorally in both reaction-time and accuracy data.
• The Interactive Activation Model (McClelland and Rumelhart, 1981) suggests that the context effect is due to top-down influence spreading downward through the early areas representing words, letters, and even features of words. This top-down modulation interacts with data-driven, bottom-up processes in word recognition.
• In a slow event-related fMRI study across seven subjects, there is a trend toward visual areas of the brain being modulated by the top-down influence of auditory sentence context: when a word is shown in context, the response is diminished relative to when a word is shown out of context.
• Although this effect was not significant across all subjects, several subjects did show significant effects of context within regions activated by words and nonwords in the upper or lower visual fields.

Future Plans
Future work will expand the visual mapping of the context effect, examining the specific visual areas that are modulated by sentence context during word recognition. We also plan to increase power by running more subjects. Preliminary data suggest that this modulation may occur even at the earliest levels of processing in the visual stream.

Extra Information
Behavioral Pilot Results
During a behavioral pilot of the task (n = 8), the context effect was highly significant in reaction time (p < .0001; mean RTs: IN = 786 ms, OC = 998 ms). Participants were much faster to respond if the word was shown in context, controlling for location. The location effect and the location-by-context interaction were non-significant. There was also a strong trend toward higher accuracy for words shown in context (p = .052), controlling for location.
Design Specifics
• Behavior: Words used for the study were balanced for frequency. Word length, imageability, concreteness, familiarity, and number of syllables were within restricted ranges for both sets. Sentence-context fit was measured by cloze values. Words were shown 4 degrees of visual angle above or below fixation, subtending 2 degrees of visual angle in height.
• Visual Mapping: Hemifield flashing checkerboard rotating in 18-sec cycles (see the phase-mapping sketch below). (Figure: F-maps, UVF and LVF, and the context t-test (In vs. Out) for each subject.)

Behavioral Results
• Participants were significantly faster to respond to words shown in context than to words shown out of context (ANOVA: Context, p = .02; mean RTs OC/IC 1017.3/971.1 ms, n = 6*). A sketch of this analysis follows below.
• There was no significant effect of position on screen and no significant interaction of position by context. Participants were faster to respond to words than to nonwords, but not significantly.
• * RTs from correct trials only; behavioral data were missing for one subject.
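The Behavioral Results above report a repeated-measures ANOVA with Context and Position as within-subject factors. Below is a sketch of one standard way to run such an analysis with statsmodels' AnovaRM; the data frame is simulated around the reported cell means and is not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subj in range(6):  # n = 6 subjects with usable behavioral data
    for context, base in (("in", 971.1), ("out", 1017.3)):  # reported means
        for position in ("upper", "lower"):
            rows.append({"subject": subj, "context": context,
                         "position": position,
                         # hypothetical per-cell RT: reported mean plus noise
                         "rt": base + rng.normal(0.0, 30.0)})
df = pd.DataFrame(rows)

# 2 x 2 repeated-measures ANOVA: Context and Position within subjects.
res = AnovaRM(df, depvar="rt", subject="subject",
              within=["context", "position"]).fit()
print(res)
```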
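The Design Specifics above describe phase-encoded visual mapping with a checkerboard rotating in 18-sec cycles. A common way to analyze such data (not necessarily the exact procedure used here) is to take each voxel's Fourier component at the stimulus frequency: its phase estimates the preferred polar angle and its amplitude the response strength. All run-length and voxel values below are hypothetical.

```python
import numpy as np

TR = 2.0        # sec per volume
CYCLE = 18.0    # sec per checkerboard rotation, as reported
N_SCANS = 180   # hypothetical run length: 360 s = 20 full cycles

def polar_angle_map(ts):
    """ts: (n_voxels, n_scans) array. Returns phase (rad) and amplitude
    of each voxel's Fourier component at the stimulus frequency."""
    k = int(N_SCANS * TR / CYCLE)  # frequency bin of the 18-s cycle
    spec = np.fft.rfft(ts - ts.mean(axis=1, keepdims=True), axis=1)
    return np.angle(spec[:, k]), np.abs(spec[:, k])

# One simulated voxel responding at the rotation frequency with a
# 90-degree phase lag (which would map to one polar angle in V1).
t = np.arange(N_SCANS) * TR
ts = np.cos(2 * np.pi * t / CYCLE - np.pi / 2)[None, :]
phase, amp = polar_angle_map(ts)
print(np.degrees(phase), amp)  # approx. [-90.] and the response amplitude
```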
Acknowledgements
Ed DeYoe, Kristi Clark, Clara Gomes, Amna Dermish, Kate Fissell, Jennifer Cochran

References
McClelland, J. and Rumelhart, D. (1981). “An Interactive Activation Model of Context Effects in Letter Perception: Part 1. An Account of Basic Findings.” Psychological Review, 88(5), 375-407.
McClelland, J. and O’Regan, J. (1981). “Expectations Increase the Benefit Derived From Parafoveal Visual Information in Reading Words Aloud.” Journal of Experimental Psychology, 7(3), 634-644.
