
AS EDEXCEL PSYCHOLOGY 2008 ONWARDS

Presentation Transcript


    1. AS EDEXCEL PSYCHOLOGY 2008 ONWARDS COGNITIVE PSYCHOLOGY: UNIT 1

    2. Cognitive Psychology: definitions & key terms used in this approach The cognitive approach relates to mental processes that help us to make sense of the world: these include processes such as perception, language, memory, attention & problem-solving. One way cognitive psychologists think about this approach is by using the information processing model. This is the idea that our senses receive information (input), our brain interprets & tries to make sense of this information (processing), and we then respond to this, usually with a specific type of behaviour (output). The computer analogy is also used in the cognitive approach; this compares the functioning of a computer with that of the human brain to help us understand how mental functions operate. The assumption is that the human brain, like a computer, is an information processor: just as information is input via a keyboard, processed & then stored on the hard drive, information reaches the brain via the senses, is processed & stored, and various outputs (behaviours) can then be made. Computers operate using binary coding; human brains operate using a series of electrical impulses, with neurons alternating between being positively & negatively charged. Also, like computers, humans are thought to have a limited capacity processor: we can only handle a restricted amount or type of information at any one time.
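
    (A minimal illustrative sketch, not part of the specification: the input -> processing -> output idea can be pictured as three small functions in Python. The function names and the "dog" example are hypothetical.)

    def sense(stimulus: str) -> str:
        """Input: the senses receive raw information."""
        return stimulus.strip().lower()

    def process(information: str) -> str:
        """Processing: the brain interprets & tries to make sense of the input."""
        if "dog" in information:
            return "approach and stroke the dog"
        return "no particular response"

    def respond(interpretation: str) -> None:
        """Output: a specific behaviour follows the interpretation."""
        print(interpretation)

    respond(process(sense("  A friendly DOG runs towards you  ")))  # -> approach and stroke the dog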

    3. Cognitive Psychology: definitions & key terms used in this approach Both computers & people have powerful processing abilities, although computers are better at algorithms (working things out systematically), whereas people are better at heuristics (rules of thumb or educated guesswork). The similarities between human information processing & computer information processing mean that we can use computers as models of human thinking. The cognitive approach also studies brain-damaged people, as case studies of people with brain damage allow researchers to compare what a person with damage to a certain area of the brain can do, and how they process information, with someone without that damage. However, it is unusual for just one part of the brain to be damaged, and unusual for a person with brain damage to be known to researchers before their condition/injury, so it is hard to gauge what their mental abilities might have been before the damage. Memory is vital for normal human functioning: without memory no learning could take place and we would have no sense of personal identity, of who we are. Memory is usually thought to consist of: encoding, sensory storage, short-term storage, long-term storage & retrieval.

    4. Cognitive Psychology: definitions & key terms used in this approach Encoding: the process of transferring information from the senses into a memory trace, i.e., when we learn something, we are encoding information. Sensory storage: all information we perceive is held for a very short time in the sensory store while we decide whether to process it further or not; very little information goes beyond this point unless processed further (visual images last for approx. half a second & sounds for approx. 2 seconds). Short-term storage: this is the next stage after sensory storage; it has a limited capacity of approx. 7 +/- 2 items & a limited duration of approx. 15-30 seconds without rehearsal. Long-term storage: this has unlimited capacity & duration (it lasts a lifetime). With rehearsal, information is transferred from short-term to long-term memory. Retrieval: this is the process of locating & extracting stored memories so that they can be used. Failure to encode, store or retrieve information properly can lead to forgetting.

    5. In depth area of study: Memory & Forgetting Memory: Can you describe & evaluate: Levels of Processing theory. AND: Reconstructive Memory. OR: The Multi-Store Model of Memory. OR: The Working Memory Model. Forgetting: Can you describe & evaluate: Cue Dependent Forgetting. AND: Displacement theory. OR: Trace-decay theory. OR: Interference theory.

    6. MEMORY: Levels of Processing Levels of Processing theory (Craik & Lockhart, 1972) maintains that memory depends on how we process information; memory is a by-product of depth of processing, i.e., how deeply we process information. There are 3 ways we process information: Structural/visual processing: we process information according to how it looks, e.g., whether a word is in upper or lower case. This is the shallowest form of information processing. Phonetic processing: we process information according to how it sounds, e.g., whether a word rhymes with another, the sound made by the word. Semantic/deep processing: we process information according to meaning, e.g., what the word means. This is the deepest form of processing & the one which leads to the greatest recall/recollection. Types of rehearsal: according to Craik & Lockhart there are 2 types of rehearsal: type 1 or maintenance rehearsal & type 2 or elaborative rehearsal. Maintenance rehearsal is the most basic type of rehearsal & the least effective for recall; it consists of simply trying to remember something by repeating it over & over, and only small amounts of information can be recalled for a short time using this method of rehearsal. Conversely, elaborative rehearsal is where information is considered more deeply/semantically; it is given meaning and is therefore more likely to result in a durable memory being laid down.

    7. MEMORY: Levels of Processing Evaluation There is a lot of experimental research which supports this theory, e.g., Craik & Tulving (1975). They tested the theory by putting participants into 3 conditions: structural/visual, phonetic & semantic. All the participants were told the experiment was a test of reaction speed & had to identify either structural, phonetic or semantic properties of words. Participants were then given an unexpected test for recall of the words in the reaction test. Results: structural ("Is the word in upper case?") recall = 15%; phonetic ("Does the word rhyme with windy?") recall = 35%; semantic ("Is the word a kind of food?") recall = 70%. Physiological evidence (Nyberg, 2002) shows that semantic information results in more brain activity, which could be an indicator of deeper processing. It has practical applications, e.g., to enhance learning & revision. Nordhielm (1994) found that viewers remembered adverts better if they processed them semantically; Riding & Rayner (1998) showed that students learn better when they process information semantically. It can better explain the diversity/complexity of human memory, i.e., memory is not simply a matter of different types of stores; it can explain why some memories are better recalled than others.

    8. MEMORY: Levels of Processing Evaluation (continued) Maybe the nature of the task, i.e., what is to be recalled, is more important than depth of processing. Morris (1977) found that lists of words were better recalled if they were processed phonetically rather than semantically. Other factors can also affect how well information is remembered, independently of depth of processing, e.g., Reber et al. (1994) showed that the emotional content of words affected recall; similarly, distinctiveness & vivid imagery can improve recall, but these are independent of depth of processing. This makes depth of processing hard to define: is it the elaboration of information or the relevance & distinctiveness of the information? In research into Levels of Processing, such as Craik & Tulving (1975), how can we be sure that in the structural/visual condition participants were not processing the information semantically, or that the words did not have some emotional meaning/distinctiveness for individual participants? This reduces the experimental validity of the research testing the theory. The definition of deep processing cannot be identified independently of its effects on recall: we recall more because of deep processing, and deep processing is whatever leads to better recall; this is circular logic, so the theory describes rather than explains how memory works. The theory can only explain explicit memories, not implicit ones, i.e., things that we don't consciously encode but seem to remember anyway.

    9. MEMORY: Reconstructive memory Bartlett (1932) proposed that remembering involves units of memory called schemas: these are mental scripts or packets of information that we have for every aspect of human life. Some of these schemas are innate, such as grasping, but some are learned through experience. E.g., through experience of going to restaurants we develop a restaurant schema, how to behave in a restaurant; or we may have a schema for 'boy racers': what they will be like, what sorts of cars they drive & how they drive them. Bartlett famously developed his theory using a Native American story called The War of the Ghosts. He noticed that Western participants, when asked to recall the story after they had read it, made lots of errors. He concluded that because the story was far removed from Western experiences & schemas, accurate recollection of the story was difficult for Western participants unfamiliar with Native American culture. Bartlett argued that the schemas we have about certain situations can lead us to make recall errors: we open up an existing schema which may be stereotypical and based on preconceived notions, and therefore flawed. We tend to remember information which confirms our stereotypes and ignore information which contradicts them.

    10. MEMORY: Reconstructive memory (continued) According to Bartlett, we reconstruct memories from relevant schemas & make use of the information in them, e.g., we witness a car accident, see that one of the drivers is a young man, & immediately open our 'boy racer' schema to help us reconstruct the memory of the accident. In the War of the Ghosts story we make use of our own Western, ghost, war & death schemas to help us interpret & recall the story, in this case inaccurately. In The War of the Ghosts, mistakes in recalling the story involved: rationalisation (adding new material/justifications for actions which are not in the original story); omissions; changes of order; alterations in importance; and distortions of emotion (incorporating one's own feelings & attitudes into the story).

    11. Memory: Reconstructive memory Evaluation There is some experimental support for the reconstructive memory theory. Allport & Postman (1947) conducted a classic experiment showing white participants a picture of a scruffy white man holding a knife to a well-dressed black man, attempting to rob him. When asked to describe the scene some time later, many participants reversed the scenario, with the black man holding the knife. As racism was commonplace in the US at the time, the explanation is that many of the participants relied on their schemas of white & black citizens to aid their recall of the picture; the schema being that black men were more likely to behave aggressively & criminally. Brewer & Treyens (1981) asked participants to wait in an office for 35 seconds; they were then asked, unexpectedly, to recall the items in the office. Most of the items in the office were consistent with an office schema, e.g., a table & filing cabinets, & there were some schema-inconsistent items: a skull & a brick. The participants made lots of substitution errors, omitting things & saying things were there when they weren't, but which were consistent with a general office schema, just not that particular office. Participants also tended to remember the skull but not the brick. The participants had used their office schemas to help them recall, but this had led to some errors. The brick wasn't recalled well because it is not especially schema-inconsistent; the skull was well recalled because it is very inconsistent with an office schema.

    12. Memory: Reconstructive memory Evaluation (continued) Carli (1999) showed that participants' memories become more stereotypical because of schemas. Participants were told either a story which ended abruptly, or one which ended with a rape. In the latter scenario participants tended to have a more distorted recollection of events in the story than in the first; participants in the 2nd condition described the character who committed the rape in more sinister terms prior to the rape. Furthermore, memories of new experiences tend to be less distorted than memories of more familiar experiences. Presumably this is because when recollecting novel experiences we cannot use schemas to help us retrieve the memory, because we have no schemas for that experience, thus supporting the role played by schemas in memory retrieval. This theory can explain why we remember events in a distorted, inaccurate way; however, it only explains the retrieval aspect of memory, and there are many other features of memory, e.g., the different types of memory that seemingly exist (e.g., the multi-store model can explain short & long-term memory). Neither can it explain why some information is better recollected than other information (Levels of Processing: deeper/semantic processing leads to better recall).

    13. Memory: the Multi-Store Model (Atkinson & Shiffrin, 1968/71) This theory suggests memory is made up of 3 different types of store: Sensory memory store (SM): a buffer for all information we perceive with our senses. This store holds information for a very short time until we decide whether to process that information further; if it is processed further, the information gets transferred to: Short-term memory store (STM): a limited capacity store for attended information, with a capacity of approx. 7 +/- 2 items and a duration of approx. 18-30 seconds. Information is encoded acoustically. If information in the STM is rehearsed sufficiently it gets transferred to: Long-term memory store (LTM): this has potentially unlimited capacity and life-long duration; information is encoded semantically.
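
    (A minimal illustrative sketch, not part of the specification: the flow of information through the stores described above could be modelled roughly as below, in Python. The class name, method names and the rehearsal threshold are hypothetical choices, not part of Atkinson & Shiffrin's model.)

    class MultiStoreMemory:
        STM_CAPACITY = 7            # approx. 7 +/- 2 items
        REHEARSALS_TO_TRANSFER = 3  # arbitrary illustrative threshold

        def __init__(self):
            self.short_term = []    # limited capacity & duration
            self.long_term = set()  # potentially unlimited capacity & duration
            self.rehearsals = {}

        def attend(self, item):
            """Attended information enters the short-term store."""
            if len(self.short_term) >= self.STM_CAPACITY:
                self.short_term.pop(0)          # the oldest item is lost
            self.short_term.append(item)

        def rehearse(self, item):
            """Sufficient rehearsal transfers an item into the long-term store."""
            if item in self.short_term:
                self.rehearsals[item] = self.rehearsals.get(item, 0) + 1
                if self.rehearsals[item] >= self.REHEARSALS_TO_TRANSFER:
                    self.long_term.add(item)

    memory = MultiStoreMemory()
    memory.attend("CAT")
    for _ in range(3):
        memory.rehearse("CAT")
    print("CAT" in memory.long_term)  # True: rehearsed item reached LTM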

    14. The Multi-Store Model

    15. Memory: the Multi-Store Model Evaluation There is much supporting evidence for the existence of different types of store, e.g., Miller (1956) noted the difference in capacity between STM & LTM, indicating different types of memory store. The Brown-Peterson studies (Brown, 1958; Peterson & Peterson, 1959) showed that without rehearsal memories of trigrams could only be retained for a very short time. Glanzer & Cunitz (1966) showed that the primacy & recency effect, or serial position curve, can be explained using the Multi-Store Model: words at the beginning of a list are well remembered because they have been rehearsed and gone into LTM; words at the end are also well remembered because they are still available in STM; but words in the middle are lost because they have not been rehearsed sufficiently to go into LTM and are beyond the duration of STM. Case studies of brain-damaged people, e.g., Clive Wearing & HM, have shown that people with organic brain damage often have relatively intact STM performance but poor LTM functioning. However, case studies of brain-damaged people have also shown that LTM is not affected in a uniform way, i.e., there are different types of LTM, e.g., procedural, episodic, semantic, and they are not all affected in the same way. Thus memory is more complex than suggested by this theory. There is also evidence from brain-damaged patients to suggest that STM encodes information according to meaning, not just acoustically.

    16. Memory: Working Memory model (Baddeley & Hitch, 1974) This is an alternative to the notion of STM supplied by Atkinson & Shiffrin. In the working memory model STM is seen as consisting of several systems that deal with different types of information: Phonological loop: deals with verbal information, especially its rehearsal; the 'inner voice'. Primary acoustic store: essentially the 'inner ear'. Visuo-spatial scratchpad: deals with visual & spatial information; the 'inner eye'. Central executive: directs the flow of information to the relevant system above.

    17. Memory: Working Memory model (Baddeley & Hitch, 1974): Evaluation The model seems to have face validity: we do seem to be able to picture things, & although saying things over & over again is a common way to remember information, it is not the only way. Participants using one of the systems (e.g., the inner voice) for two different sets of information struggle to recall; however, when the same two sets of information are encoded using different parts of the system, e.g., the inner voice & the inner eye, recall is much better. This supports the theory, suggesting that STM is made of more than one type of system & is more complex than implied by the Multi-Store Model. The Multi-Store Model often relies on experiments using letters, words & numbers; however, the working memory model can test other facets of memory; it is more detailed & dynamic. The role of the central executive is underdefined & vague: how does it allocate information to each system & select what information is to be attended to? This is not clear.

    18. Forgetting: Cue Dependent Theory This theory states that we forget things because we do not have appropriate cues to retrieve these memories. The encoding specificity principle (Tulving) is related: this states that the greater the similarity between the encoding event & the retrieval event, the better recall will be, e.g., encoding & recalling in the same place, or being in the same emotional state at encoding & recall. The Tip of the Tongue Phenomenon (Brown & McNeill, 1966) is often used to support the theory, e.g., we often cannot remember something until we are given some relevant prompt, or cue. There are 2 types of cue dependent forgetting: context-dependent forgetting & state-dependent forgetting. Context-dependent forgetting: this refers to forgetting that occurs when the location/context at recall differs from the one at encoding; being in the same location at encoding & recall improves recall. The environment can also provide context cues, e.g., music (Smith, 1985); smells (Schab, 1990, chocolate; Aggleton & Waskett, 1999, 'smelly museum' study, Jorvik Viking Centre, York). State-dependent forgetting: this refers to our emotional & physical state at encoding & recall, e.g., happy, sad, intoxicated, fearful, exhausted.

    19. Forgetting: Cue Dependent Theory: Evaluation There is a lot of experimental support for both types of cue dependent forgetting: state & context dependent forgetting. Smith (1979), context: participants were given a list of 80 words to learn in a distinctive basement. The next day they were asked to recall either in the same location or in a 5th floor room which was very different. Recall in the same location = 18/80; in the different location = 12/80; others who recalled in the different location but were asked to imagine themselves back in the original room recalled 17/80. See also Godden & Baddeley (1975), divers study, and Abernethy (1940), classroom setting. Environmental context supporting studies: Smith (1985), music: quiet, Mozart or jazz; Grant & Bredahl (1998), noisy or quiet conditions; Schab (1990), smell of chocolate as a cue; Herz (1997), smell of peppermint, osmanthus & pine: all these studies show memory performance is better when cues present at encoding are also present at recall. State-dependent forgetting supporting studies: Duka et al. (2000), alcohol/placebo conditions: best recall when in the same state at recall as at encoding; Goodwin et al. (1969) found the same effect with heavy drinkers; Eich (1980) found the same effect with a range of other drugs incl. marijuana; Lang et al. (2001) found similar cue effects when a fearful emotional state was induced at encoding & recall; Miles & Hardman (1998) found that a physiological state induced by aerobic exercise acted as a powerful state-dependent cue to recall.

    20. Forgetting: Cue Dependent Theory: Evaluation (continued) There is much face validity: it explains the tip-of-the-tongue phenomenon & why cues, like going back to an old house or hearing a familiar record, trigger lots of old memories. There are many practical applications, e.g., memory can be improved by introducing context or state cues, e.g., crime reconstructions, or remembering exam material by imagining you are back in the place where you revised. However, it cannot explain why some memories are better remembered than others, e.g., emotionally charged memories might be remembered well even without cues, or why we generally remember happy material better than unhappy material (depressed people tend to remember unhappy material more than happy events).

    21. Forgetting: Trace-decay theory This theory states that learning causes a physical change in the neural networks in the brain responsible for memory. This change occurs at the synapses, or gaps between neurones. When this change occurs a memory trace (or engram) is laid down. This trace becomes stronger through repetition and rehearsal. Forgetting occurs when the memory trace is not strengthened by practice; the trace then begins to break up and disintegrate. Disuse and the passage of time inevitably lead to the disintegration of a memory trace. As STM has limited duration, trace decay there is inevitable & very quick; however, trace decay takes longer in LTM because the memory trace is stronger and more secure, as a more profound physical change has occurred at the synapse. The analogy is with a pathway over grass: if a shortcut is frequently used, eventually a pathway through the grass will be laid down, but if it is not used the grass will begin to grow over the pathway.

    22. Forgetting: Trace-decay theory Evaluation There is some evidence for physiological changes at the synapses when learning occurs, which is consistent with trace-decay theory. However, this theory cannot explain why we can recall things that we have not thought about for a long time; presumably the physical changes at the synapses would have decayed and the memory trace dissipated. It cannot explain why we can retrieve old or 'lost' memories with the appropriate state or context cue. Jenkins & Dallenbach (1924) had 2 groups of participants learn a list of words: one group recalled the words after a night's sleep (no interference); the other group learned the words at the beginning of the day & recalled them at the end of the day. The group who learned the words before going to bed & recalled the list the next morning recalled more words than the other group, presumably because they experienced less interference with their memories. However, according to trace-decay theory both groups should have had the same level of forgetting, because the passage of time & disuse lead to forgetting, not the amount of interference experienced, and as both groups experienced the same passage of time & lack of rehearsal the level of forgetting should have been the same.

    23. Forgetting: Interference theory In LTM one explanation for forgetting is interference; this is when forgetting occurs because of interference or confusion between old & new memories. This does not mean the memory is lost, as in trace-decay theory, but that it becomes confused or distorted as a result of conflicting memories. There are 2 types of interference: retroactive & proactive. Retroactive interference is when later/newer or more recently acquired memories interfere with the recall of earlier memories/learning: e.g., because you have a new girlfriend, when you see an ex-girlfriend you cannot remember her name; the name of your current girlfriend confuses or distorts your earlier memory of names. Proactive interference is when earlier learning/memories interfere with later/newer or more recently acquired memories: e.g., you call your current girlfriend by an ex-girlfriend's name.

    24. Forgetting: Interference theory Evaluation There is a lot of research evidence supporting interference theory in LTM. E.g., see the earlier study by Jenkins & Dallenbach (1924); and McGeoch & McDonald (1931): they gave participants two lists of information to learn; the more similar the second set of information was to the first set, the greater the level of interference & the worse the recall of the first list of information. Also, Baddeley & Hitch (1977) studied 2 groups of rugby players: one group had played all the games that season; the other group, because of injury, had missed lots of games. The first group were worse at recalling the names of all the teams they had played that season because they had a higher degree of interference than the players who had missed lots of games through injury, and so had played less and experienced less interference. However, interference only accounts for a certain type of forgetting; we forget things even when there is limited interference. Just because there is only one word for forgetting, it does not mean there is only one type of forgetting.

    25. Forgetting: Displacement Theory This is an explanation of forgetting in STM. STM has a limited capacity, so when that capacity is reached (i.e., 7 +/- 2 items) new information displaces older information stored in STM, e.g., item number 10 displaces or replaces item number 1 in your STM. Displacement seems to make sense for STM, but it is hard to distinguish between displacement, interference & trace decay as explanations of forgetting in STM, i.e., item number 1 might have been lost due to displacement, but equally could have been lost due to trace decay (it had been some time since it was rehearsed, STM has limited duration, and so the engram or memory trace had disintegrated); or all the items after number 1 had caused interference and so prevented recollection of that item. NB, lots of research into memory involves learning lists of random words, numbers & trigrams, and so lacks a certain amount of validity.
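
    (A minimal illustrative sketch, not part of the specification: displacement behaves like a fixed-capacity queue, which Python's collections.deque with a maxlen can demonstrate.)

    from collections import deque

    # Once the store is full (here 7 items), each new item pushes out the oldest,
    # just as item 10 displaces item 1 in the description above.
    short_term_memory = deque(maxlen=7)
    for item in range(1, 11):           # present items 1..10
        short_term_memory.append(item)

    print(list(short_term_memory))      # [4, 5, 6, 7, 8, 9, 10] - items 1-3 displaced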

    26. Studies in detail: Can you describe & evaluate

    27. Godden & Baddeley (1975) divers study: cue dependent forgetting Name: Godden & Baddeley (1975) Aim: To investigate whether a natural environment can act as a cue for recall; to ascertain if encoding & recall in the same context/environment improves recall. Method: 18 divers were randomly allocated to 1 of 4 conditions: 1. learn & recall on dry land; 2. learn underwater & recall underwater; 3. learn on dry land & recall underwater; 4. learn underwater & recall on dry land. Participants were also given a recognition test of the words. Generalisability: Although the study involved divers, the concept under investigation can be generalised; memory is a universal cognitive function. Reliability: As the research was a field experiment it is harder to control confounding/extraneous variables & having a standardised procedure is more difficult, e.g., it is very difficult to control diving & weather conditions. Application to real life: there are many practical applications, e.g., it is better to learn tasks, such as CPR, in the same environment as they are likely to be carried out in. Abernethy (1940) found

    28. Godden & Baddeley (1975) divers study: cue dependent forgetting Results: Recall was 50% higher when it took place in the same context/environment as encoding (learning). 40% more words were forgotten if recall took place in a different environment to learning. A change of environment between encoding & recall had no effect on the word recognition test. NB, word recall is much harder than word recognition; thus we need more cues to prevent forgetting in recall than in recognition. Conclusion: The results suggest that environment/context does act as a cue for recall; we forget more readily if we do not have contextual cues. Application to real life contd: that students' scores improved if they recalled information in their usual classroom. It can be used to help police interviews & eye witness testimony (EWT), i.e., going back to the scene of a crime to interview a witness. Validity: it was a field experiment so high in ecological validity, i.e., measuring behaviour in a real world situation. The 4 separate conditions ensured that context was likely to be the cause of greater recall, i.e., encoding & recall took place both underwater & on dry land, & in both conditions where encoding & recall took place in the same location recall was higher. Ethics: there are no ethical issues as such; informed consent was obtained, and no deception or distress was involved.

    29. Peterson & Peterson (1959): the duration of short-term memory (STM) Name: Peterson & Peterson (1959) Aim: To investigate the duration of STM & how information passes from STM into LTM. Method: It was a laboratory experiment; 24 participants were presented with 48 nonsense consonant trigrams, e.g., CFD, & were asked to recall these trigrams. However, participants were asked to recall after intervals of 3, 6, 9, 12, 15 or 18 secs. To prevent participants rehearsing the trigrams they were asked to count backwards in 3s from a set number, e.g., 293. Results: After 3 secs approx. 80% of trigrams were correctly recalled; this fell to 50% after 6 secs, 30% after 9 secs & under 10% after 18 secs. Conclusion: STM has a duration limited to approx. 20 secs on average (18-30 secs); without rehearsal memory quickly fades. This supports the trace-decay theory of memory & also the existence of different types of memory store, STM & LTM, so supporting the Multi-Store Model of Memory. Generalisability: The study involved 24 undergraduate psychology students, but can still be generalised to the wider population because the concept under investigation, memory, is a universal cognitive function: we all have a memory. Reliability: It was a well-controlled study, so easy to repeat. Lots of research suggests the limited duration of STM. Application to real life: we all have to remember information using STM & LTM in our everyday lives. Validity: It lacks ecological validity for 2 reasons: we rarely have to recall nonsense syllables, so the task is unrealistic; and the distracter task, counting backwards in 3s & so requiring high concentration, does not reflect the type of interference experienced in the real world. Ethics: There are no real ethical issues.
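
    (A minimal illustrative sketch, not part of the specification: the approximate recall figures reported above can be plotted to show the rapid fall-off, assuming matplotlib is available; the 18-second value of "under 10%" is plotted as roughly 10%.)

    import matplotlib.pyplot as plt

    delay_secs = [3, 6, 9, 18]
    percent_recalled = [80, 50, 30, 10]   # approximate values from the slide

    plt.plot(delay_secs, percent_recalled, marker="o")
    plt.xlabel("Retention interval (seconds)")
    plt.ylabel("Trigrams correctly recalled (%)")
    plt.title("Peterson & Peterson (1959): duration of STM without rehearsal")
    plt.show()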

    30. Craik & Tulving (1975): Levels of Processing Experiment Name: Craik & Tulving (1975) Aim: To test the Levels of Processing theory (Craik & Lockhart) by analysing recall rates after different levels of information processing. Method: A laboratory experiment; 24 participants were shown 60 words via a tachistoscope, which allows visual material to be presented under conditions of very brief exposure, & asked questions about the words requiring either structural (visual), phonetic or semantic processing. Participants were then given a recognition task, where they were asked to recognise the 60 original words from a list of 180 (the 60 original words & a further 120 new words). Generalisability: See comments for Peterson & Peterson, Godden & Baddeley above. Reliability: See comments for Peterson & Peterson above. Application to real life: In real life we encode & process information at different levels according to its relevance & meaningfulness. Validity: Experimental validity was good because Craik & Tulving did not tell participants they would be asked to recognise words later; therefore the study was testing incidental learning through depth of processing, i.e., participants were not consciously using other memory techniques to ensure they would remember more words. However, ecological validity was low because of the task itself. In real life we are

    31. Craik & Tulving (1975): Levels of Processing Experiment (continued) Results: 17% recognition for structural (visual) processing; 36% for phonetic processing; & 65% for semantic processing. Conclusion: Deeper, i.e., semantic, processing leads to better recognition & so better recall, supporting the Levels of Processing theory of memory. Validity contd: not usually confronted with lists of unassociated words presented for a very short period of time; we encode & process information at different levels but not usually this kind of information, i.e., the task is unrealistic & presents a simplistic view of memory, ignoring the role of imagery & emotion that are often linked to LTM (e.g., Morris et al., 1977). Ethics: There are no real ethical issues; the deception/lack of fully informed consent was minor, unlikely to cause personal distress & necessary for the experimental validity of the study to avoid demand characteristics.

    32. 1 Key Issue in Cognitive Psychology

    33. Eye Witness Testimony Levels of Processing, Reconstructive memory, Rehearsal, Cue-Dependency Research evidence suggests that EWT is unreliable: Loftus & Palmer (1974) showed that changing the verb used to describe an accident (e.g., 'collided', 'bumped', 'smashed') had a dramatic effect on estimations of speed made by participants witnessing video footage of a car accident, i.e., the influence of post-event information. Similar studies include Loftus & Zanni: 'Did you see the broken headlamp?' vs 'Did you see a broken headlamp?' (NB, definite vs indefinite article). Weapons effect: Loftus (1979) showed that when a weapon is involved witnesses often concentrate on the weapon & do not focus on the characteristics of the assailant. Is this due to the stress of a situation involving a weapon, or the unusualness of a situation involving a gun? Reconstructive memory suggests we use schemas to help us interpret events, which may reduce accuracy. Cue & state dependent theories of forgetting suggest we need the right cues to recall accurately. Is EWT really unreliable? Much of the research conducted into EWT is based on laboratory experiments & therefore lacks ecological validity: real life events happen quickly, are confusing & generate intense emotions. Participants in lab experiments know they have to pay attention & so are already cued for attention. Also, the type of questioning in lab experiments does not reflect the importance & intensity of police questioning. Later studies have tried to use field experiments, e.g., Yarmey (2004). In EWT research slides & video footage are often used; this lacks experimental realism as the emotion & involvement of a real incident are not achieved. A lot of EWT research lacks population validity; the participants are often drawn from undergraduate students, so are not necessarily representative of the general population in terms of age, personality types etc.

    34. Eye Witness Testimony Pickel (1998) showed participants a video clip of a man entering a hairdresser's holding either scissors, a raw chicken, a wallet or a gun. The objects represented either high or low unusualness, or high/low threat, or both. Participants could identify the man from a line-up but were poor at remembering the object he was holding the more unusual (for that situation) it was: indicating that the stress/arousal of the situation involving the gun was not necessarily the cause of poor memory, but that it was the unusualness of the situation that adversely affected memory. Schema theory & stereotyping: EWT can also be explained using schema theory (reconstructive memory), as the above study shows. People are often heavily influenced by their schemas: see reconstructive memory, Brewer & Treyens, & Allport & Postman. The population of real eyewitnesses is not homogenous, so samples of students do not represent all parts of society. Even Loftus has shown that eyewitnesses can only be misled about peripheral details of an incident, not necessarily central details, e.g., when asked leading questions about the colour of a stolen purse, participants were not misled about its colour. Yuille & Cutshall showed that in real life situations EWT can be very accurate. They studied statements given 4 months after witnessing a shooting & found that recall was accurate & not affected by leading questions. Smith & Ellsworth (1987) showed that if a witness does not trust the interviewer, or believes the interviewer has no knowledge of the incident, they will not be influenced by leading questions.

    35. Flashbulb Memories A special form of memory we have evolved that enables us to remember particularly distinctive events in detail (in evolutionary terms it gives us a survival advantage). Events such as the death of Princess Diana, the Twin Towers attacks & the London bombings are events we might recall in a lot of personal detail. Brown & Kulik (1977) found that participants had very detailed & specific memories for events such as the assassinations of Martin Luther King, John & Robert Kennedy, & the deaths of relatives. We tend to have flashbulb memories for events that are more personal to us, e.g., Brown & Kulik found that 75% of black participants reported flashbulb memories for the assassination of Martin Luther King, compared to only 33% of white participants. Personal relevance/consequences are important for accurate flashbulb memories. Conway et al. (1994) showed that 86% of UK participants had a flashbulb memory of the resignation of Margaret Thatcher 11 months after the event, compared to only 29% of participants from other countries. However, flashbulb memories may not be a special form of memory at all. Neisser & Harsch (1992) asked students to report how they learned about the Challenger space shuttle disaster 1 day after the event & 3 years after the event. When asked 3 years later to recount how they learned of the disaster, no one produced an entirely accurate report (compared to the one produced a day later) & over 1/3 produced a completely inaccurate report, even though they thought it to be very accurate. Similar findings were reported by Wright (1993) about the 1989 Hillsborough football disaster, 5 months after the event. Neisser argues that flashbulb memories are not really distinct memories but are simply more vivid because they are likely to be recalled fairly often & have much media coverage. Furthermore, vividness of recall is not the same as accuracy of recall.

    36. The Cognitive Interview The cognitive interview is a technique used by the police to try to elicit more accurate recall from an eyewitness, and is based on psychological research into memory. There are 4 basic components of a cognitive interview: recreating the context/environment of the incident (mentally); reporting every detail, however seemingly irrelevant; reporting the incident in different orders; and reporting how others may have viewed the incident. It is a more open & less interrogatory form of interviewing than traditional police interviews. It is designed to provide as many cues to recall as possible. It prevents memory contamination by asking open, not leading, questions, in an attempt to avoid memory reconstruction caused by introducing new information through leading questions which may trigger pre-existing schemas. Fisher et al. (1985) reported that recall was much better using this technique: 42 items, compared to hypnosis, 38 items (but with confabulation), and a standard interview, 29 items. Geiselman (1984) found that the cognitive interview produced 35% more information than the standard interview, with no difference in error rates. However, some argue that recall may actually be inaccurate due to witnesses being asked to recall from another's perspective, leading to witnesses guessing what someone else might have seen. Some research has also shown that the cognitive interview fails to provide significant improvement in recall compared to standard interviews & generates greater errors in recall (Memon, 1997). The cognitive interview may not be suitable for children because of their limited linguistic skills; this means that they might not be able to respond appropriately to the cognitive style of questioning.

    37. The Cognitive Interview Studies such as Loftus & Palmer illustrate how just a change of verb can affect eyewitness recall; therefore a technique such as the cognitive interview, which takes into account the power of words, is clearly needed. The cognitive interview technique is based on well-controlled laboratory experiments which were replicated & found to be reliable, e.g., Loftus & Palmer. Milne (1997) found that the cognitive interview did not seem to lead to the recall of more material than any other technique, contradicting other research (see previous slide). Memon et al. (1997) found little difference in recall when asking witnesses to recall from different places/sequences in the witnessed event compared with recalling normally. The enhanced version of the cognitive interview (Fisher & Geiselman, 1992) includes many different features, so it is hard to know which elements are effective.

    38. Research Methods/How Science Works & Practical As with the social approach, you will need to know a range of scientific terminology & be able to describe & evaluate a range of different cognitive psychology research methods. You will also need to conduct and keep a record of a short practical, based on principles from cognitive psychology.

    39. Experiments: Laboratory, Field & Natural

    40. Hypotheses See Social Psychology: hypotheses can be one-tailed (directional) or two-tailed (non-directional). This means: 1-tailed hypothesis = makes a definite prediction about the direction of the results, e.g., participants who are given the same context cue at recall & encoding will recall more words than those who are not, i.e., they will remember more words. 2-tailed hypothesis = something will happen but the direction of the results is not predicted, e.g., giving participants a state cue at encoding & recall will have an effect on recall, but this effect could improve or reduce recall; the 2-tailed hypothesis does not state what the effect will be, simply that there will be one.
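
    (A minimal illustrative sketch, not part of the specification: the difference between a 1-tailed and a 2-tailed hypothesis shows up in how a statistical test is run. The sketch below assumes SciPy 1.6+ is available and uses made-up recall scores.)

    from scipy import stats

    same_context_cue  = [17, 18, 16, 19, 18, 17]   # hypothetical words recalled
    different_context = [12, 13, 11, 14, 12, 13]

    # 2-tailed: "the cue will have SOME effect on recall" (direction not predicted)
    t, p_two_tailed = stats.ttest_ind(same_context_cue, different_context)

    # 1-tailed: "the same-context group will recall MORE words" (direction predicted)
    t, p_one_tailed = stats.ttest_ind(same_context_cue, different_context,
                                      alternative="greater")

    print(p_two_tailed, p_one_tailed)  # here the 1-tailed p-value is half the 2-tailed one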

    41. Experimental Control & Variables (NB, only experiments have IVs & DVs) Independent Variable (IV): This is the variable that is manipulated or changed by the researcher; it is the thing under investigation, e.g., does alcohol affect reaction time? Alcohol is the IV. Dependent Variable (DV): This is the variable that is being measured, or the result of the experiment; the DV depends on the IV, e.g., reaction time is the DV, it depends on the level of alcohol consumed. Extraneous Variables: Any variable that can influence the DV but which has nothing to do with the IV; extraneous variables may or may not confound the results. Confounding Variables: This is a variable, other than the IV, which has confounded the results, i.e., has directly affected the DV but is nothing to do with the IV, e.g., tiredness may have affected reaction time, not alcohol consumption. The researcher should try & control/eliminate confounding variables wherever possible. Situational Variables: These are extraneous variables related to the situation the study was conducted in, e.g., level of noise, weather conditions, heat or level of crowding could all affect the participants' performance & so influence the DV; as such they should be controlled/eliminated wherever possible. Participant Variables: These are extraneous variables related to the nature/experiences of the participants themselves, e.g., mood, personality type, skills or relevant experience, fatigue. E.g., reaction time might be affected by a participant's level of fatigue or tiredness; a test of alcohol & driving reaction time might be affected by participants' driving experience.

    42. Operationalisation In an experiment, once the IV & DV have been decided it is important to define, or operationalise, exactly what the IV & DV are. A precise definition makes the experiment easier to design & the DV easier to measure validly. E.g., if your hypothesis was that boys are more aggressive than girls, the IV is easy (gender), but the DV would need to be more precisely defined/operationalised, i.e., what would count as aggressive behaviour?

    43. Participant Design This refers to how participants are allocated to the conditions in the experiment, e.g., if testing reaction time & alcohol consumption, will participants be involved in both conditions, i.e., doing the reaction test sober & after various units of alcohol; or will some participants do the reaction test sober and others after consuming alcohol? Independent Measures Design: participants are only involved in 1 condition, e.g., one group does the reaction test sober, the other does it after consuming alcohol. Repeated Measures Design: the same participants are used in both experimental conditions, e.g., participants do the reaction test both sober & intoxicated. Matched Pairs Design: essentially the same as an Independent Measures Design, except that participants in both conditions are as equally matched as possible on a quality the researcher thinks is important, e.g., with alcohol & reaction time they might be matched for age, or experience of alcohol.

    44. Participant Design (Continued)

    45. Participant Design (Continued) Order effects, Counterbalancing, Randomisation Order Effects: In a repeated measures design participants take part in all experimental conditions; this may lead to order effects (practice effects or fatigue), where, as a result of having done a similar condition before, participants become more practised and so perform the 2nd part of the experiment better, or they become more tired as a result of doing something similar before & so a fatigue effect sets in. Both practice & fatigue can affect participants' performance and so artificially skew the DV: the DV results are influenced by factors other than the IV. Counterbalancing: To counter order effects, counterbalancing is used. This is where participants are divided equally between the experimental conditions, e.g., half do condition A first then condition B, & half do the reverse, condition B first then condition A. If everyone did condition A then condition B, results might be skewed by order effects (practice or fatigue). Randomisation: Very similar to counterbalancing except that participants are allocated to each experimental condition entirely randomly, e.g., by tossing a coin or drawing names out of a hat.
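
    (A minimal illustrative sketch, not part of the specification: counterbalancing versus randomisation of condition order for a two-condition repeated measures design, in Python. Participant labels are hypothetical.)

    import random

    participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

    # Counterbalancing: half do A then B, the other half do B then A.
    counterbalanced = {}
    for i, p in enumerate(participants):
        counterbalanced[p] = ["A", "B"] if i < len(participants) // 2 else ["B", "A"]

    # Randomisation: each participant's order is decided entirely by chance
    # (the equivalent of tossing a coin).
    randomised = {p: random.sample(["A", "B"], k=2) for p in participants}

    print(counterbalanced)
    print(randomised)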

    46. Demand Characteristics/Experimenter Effects Demand Characteristics: Human participants may respond to the experimental conditions that they are involved in; they are not passive & may alter their behaviour simply because they are in an experiment. E.g., participants may try & guess the purpose of the experiment & behave accordingly, either according to their perception of the researcher's expectations, or to contradict what they believe the researcher's expectations are. Experimenter Bias: This refers to the subtle cues & signals, sometimes given completely unconsciously, by the researcher/experimenter, which may influence the reactions & behaviour of the participants. E.g., the researcher may have expectations or personality & behaviour traits that subtly influence the responses given by the participants, thus skewing results: the DV is affected by factors other than the IV. E.g., a female experimenter asking a male participant about his attitudes towards women.

    47. Types of Validity & Reliability Ecological Validity: how well does a study represent behaviour in the real world; are we measuring how people would behave in the real world, or simply how they would behave in a lab situation? Construct Validity: how well does the study measure the construct or phenomenon being studied, i.e., how well has the construct (e.g., aggression) been operationalised in the study? Content/face Validity: does the study seem, on the face of it, to measure what it claims to be measuring? Predictive Validity: how well does the study predict future behaviour, e.g., if a job interview has predictive validity, everyone who did well at the interview would nearly always also be good at the actual job. Test-Retest Reliability: if the participants take the test again, maybe a few months apart, will their performance still be similar, or was the original performance a one-off, affected by variables on the day other than the IV? Inter-rater Reliability: if two researchers rate the performance or score the behaviour in the same way, we can be fairly confident that the results are more reliable & consistent than if the scores were rated by just one experimenter, where subjectivity in interpreting results/behaviour might be an issue.
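
    (A minimal illustrative sketch, not part of the specification: inter-rater reliability, like test-retest reliability, is often quantified as the correlation between two sets of scores. The ratings below are made up, and the Pearson correlation is computed by hand.)

    from math import sqrt

    def pearson(x, y):
        """Pearson correlation between two equal-length lists of scores."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    rater_1 = [7, 5, 9, 4, 6, 8]   # hypothetical aggression ratings by researcher 1
    rater_2 = [6, 5, 9, 3, 7, 8]   # the same behaviour rated by researcher 2

    print(round(pearson(rater_1, rater_2), 2))  # close to 1 = good inter-rater reliability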

    48. Types of Validity & Reliability Experimental Validity: Does the procedure of the study accurately reflect what is being studied; is the experiment credible (e.g., Milgram), or would demand characteristics play a big part in the results (e.g., Zimbardo)? Population Validity: Are the participants a good representative sample of the target population, or are they drawn from one particular type of people, e.g., all strongly religious, or all students? Concurrent Validity: Are the results from the behaviour being studied in line with other measures of the same behaviour, e.g., a new test of reading ability should be in line with other measures of reading age if it has concurrent validity. Equivalent Forms Reliability: Instead of re-testing participants some time later, simply give them another form or version of the test of the behaviour under investigation, and see if the results from the two versions are the same. E.g., to test IQ you might use more than one type of IQ test or measure of intelligence.

    49. Research Methods/How Science Works: The Practical (see also Social Psychology) Choose an appropriate design: repeated, independent or matched pairs; why that design? What will your procedure be? Note the ethical implications: is the cue material ethical (i.e., not illegal drugs), do participants leave feeling positive about themselves & psychology, are they debriefed, not caused distress, etc.? Analysing results: mean, median, mode, range, standard deviation. Graphical representation: bar graph, frequency graph, histogram. Was the experimental procedure valid (experimental, ecological, population etc.), reliable (standardised instructions/procedure, etc.), generalisable (sample representative, sampling method, random, opportunity etc.)?
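
    (A minimal illustrative sketch, not part of the specification: the descriptive statistics listed above can be computed with Python's built-in statistics module; the recall scores are made up.)

    import statistics

    scores = [12, 15, 15, 9, 14, 11, 15, 10]   # hypothetical words recalled per participant

    print("mean:", statistics.mean(scores))
    print("median:", statistics.median(scores))
    print("mode:", statistics.mode(scores))
    print("range:", max(scores) - min(scores))
    print("standard deviation:", round(statistics.stdev(scores), 2))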
