
Checklists and Monitoring: Why Two Crucial Defenses Against Threats and Errors Sometimes Fail

Key Dismukes and Ben Berman

Human Systems Integration Division

NASA Ames Research Center

6 March 2007

Human Factors in Aviation Conference

Monitoring
  • Robert Sumwalt, NTSB: Monitoring & cross-checking are the last line of defense against accidents
  • Three functions:
    • Keeps crew apprised of status of aircraft, automation & flight path
    • Helps crew catch errors
    • Helps detect evolving aspects of flight situation
  • FAA expanded AC 120-71, Standard Operating Procedures, to provide guidance for monitoring procedures
  • Many airlines now designate the pilot not flying as the Monitoring Pilot to emphasize the importance of this role
Critical Defenses Sometimes Fail
  • NTSB: Crew monitoring and challenging inadequate in 31 of 37 accidents
    • e.g., FO failed to challenge Captain flying unstabilized approach
  • Flight Safety Foundation: Inadequate monitoring in 63% of approach & landing accidents
  • ICAO: Inadequate monitoring in 50% of CFIT accidents
  • Honeywell: Poor monitoring in 1/3 of runway incursions
Critical Defenses Sometimes Fail (Continued)
  • Dismukes, Berman & Loukopoulos (2006): The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents:
    • Checklist performance breaks down in many accidents
      • e.g., Houston, 1996: DC-9 crew fails to switch hydraulic boost pumps to high pressure; flaps/slats & gear do not deploy → gear-up landing
      • Little Rock, 1999: Last half of Before Landing checklist not completed → spoilers not armed and did not deploy
      • Nashville, 1996: Poorly worded Unable to Raise Gear checklist → inappropriate crew action → ground spoilers deployed on short final
  • Many factors involved, but effective use of checklists and monitoring might have prevented these accidents
Why Do Checklists and Monitoring Sometimes Fail?
  • Too often the assumed answer is:
    • “complacency”
    • “lack of diligence”, or
    • “lack of discipline”
  • These are labels, not explanations
  • Real answer requires empirical investigation
    • FAA grant for jumpseat observation study of how checklists and monitoring are typically executed in normal operations and the factors leading to errors
Jumpseat Study: Preliminary Findings
  • Hosted by two major U.S. passenger airlines
  • B737 and Airbus 320 fleets
  • Observations planned with large regional carrier
    • Provide third aircraft type
  • Observe routine revenue flights
    • How checklists and monitoring performed
    • Conditions under which performed
  • Developed cognitive factors observation form
  • Data from 22 flights, all with different crews
    • Each flight involves 10 checklists → 220 checklists performed
    • Continuous monitoring demands on crew
Checklist-Use Observations: Good and Not-So-Good
  • Good:
    • Most checklists performed correctly
    • Equipment problems caught with checklists
  • Some instances combine good and not-so-good:
    • Door open during Before Start checklist
    • Captain responded to challenge, saying door light OUT
    • First Officer cross-checked and caught error
  • Not-so-good:
    • 3.1 checklist errors per flight (average)
    • Crews ranged from 0 to 10 errors/flight: standardization issue?
Checklist Errors Observed (number observed)
  • Begun from memory - 2
  • Done entirely from memory - 3
  • Responding without looking - 9
  • Responding, then looking - 2
  • Calling out items without looking up from card - 1
  • Item not set properly and not caught - 3
  • Not initiated on time - 8
  • Self-initiated, prematurely - 5
  • Interrupted/suspended, not completed - 4
  • Extra item performed - 1
  • Responded “Set” in lieu of numerical value - 6
  • Flow not performed - 24
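The itemized counts above are consistent with the averages reported on the earlier slide; a quick arithmetic check (22 flights, figures taken from this list):

```python
# Checklist-error counts from the list above, in order of appearance
counts = [2, 3, 9, 2, 1, 3, 8, 5, 4, 1, 6, 24]
flights = 22  # observed flights, each with a different crew

total = sum(counts)                     # 68 checklist errors in all
per_flight = round(total / flights, 1)  # 3.1, matching the reported average
print(total, per_flight)
```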
“Flow-then-Check” Devolved to Check-Only (Checklist-Set) (24 instances)
  • Flight Operations Manual:
    • Scan panel in systematic flow, setting and checking each item in turn, then:
    • Run associated checklist
  • Observed: flows not performed or performed incompletely
  • Items set from checklist rather than flow
  • Intended redundancy is lost, undermining power of flow-then-check to catch threats and errors
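The cost of losing that redundancy can be sketched with a toy probability model; the 5% miss rates below are illustrative assumptions, not measured values:

```python
# Toy model: a misset item survives flow-then-check only if BOTH
# layers miss it; under check-only, a single miss lets it through.
# The miss probabilities here are hypothetical, chosen for illustration.
p_flow_miss = 0.05    # flow fails to catch the misset item
p_check_miss = 0.05   # checklist independently fails to catch it

flow_then_check = p_flow_miss * p_check_miss  # both layers must fail
check_only = p_check_miss                     # single layer of defense

print(round(flow_then_check, 4))  # 0.0025 -> about 1 miss in 400
print(check_only)                 # 0.05   -> about 1 miss in 20
```

Even with generous assumptions, dropping the flow multiplies the chance of an uncaught misset item twentyfold, which is the sense in which the power of flow-then-check is undermined.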
Why Do Crews Shortchange the Flow-then-Check Procedure?
  • More research required but can speculate:
    • Individuals find it inherently hard to check an action just performed
    • Pilots may not fully understand importance of redundancy → streamline procedure for “efficiency”
    • How thoroughly is need for redundancy trained and how rigorously is it checked?
Checking Without Really Checking (15 instances)
  • Examples:
    • Pilot responding to challenge sometimes verbalized response, then looked at item to be checked
    • Pilot verbalized response to challenge without looking at item at all
    • On First Officer Only challenge-response checklist: First Officer read challenges without looking up (relied on memory of having set items?)
    • Item not properly set and NOT caught when checklist performed
“Laziness” is Not a Useful Explanation
  • Answer more complicated:
    • Cognitive processes by which humans process information
  • After performing a checklist hundreds of times:
    • Verbal response becomes highly automatic, disconnected from effortful act of looking at and thinking about the item to be checked
    • “Expectation bias” predisposes individual to see item in normal position, even if not: “Looking without seeing”
    • Does the airline emphasize training/checking that items must be re-inspected, rather than relying on memory of having set item?
Good Techniques
  • “Point and Shoot”:
    • Pilot responded to each challenge by pointing or touching the item before verbalizing response
    • Slows down responding, breaks up automaticity → checking becomes more deliberate
    • Inherent trade-off between speed and accuracy — which do we want?
      • Unacceptable to preach reliability and reward speed
Interrupted Checklists Not Resumed (4 instances)
  • Example: Flight attendant interrupted crew just before last item (crew briefing)
  • Checklist SOP may be poorly designed
  • Prospective memory: Needing to remember to perform an action that must be deferred
  • 1987–2001: Five of 27 major U.S. airline accidents occurred when the crew forgot to perform a normal procedural step, e.g.: forgetting to set flaps to takeoff position, forgetting to arm spoilers for landing, forgetting to set hydraulic boost pumps to high before landing, forgetting to turn on the pitot heat system

  • Studying causes of prospective memory errors and ways to prevent
    • Human brain not well designed to remember to perform deferred tasks, resume interrupted tasks, or remember to perform tasks out of habitual sequence
    • Countermeasures could be designed
    • NASA funding for this research drastically curtailed
Monitoring: Good News and Not-so-good News
  • Good:
    • Most callouts made according to SOP
    • Several pilots caught errors made by other pilots
  • Not-so-good
    • Average number of monitoring errors/flight: 3.7
    • Considerable variation among crews; range 1 to 6 errors
List of Monitoring Errors (82 instances)
  • Undesired aircraft state not challenged - 7
  • Checklist distracted from monitoring - 3
  • Communication error not caught - 2
  • Vertical mode not monitored - 2
  • Callout cued by automation, not independent - 36
  • Level-off not monitored - 7
  • Automation/FMC input not corrected - 7
  • FMC input executed without verification - 9
  • Enroute charts not unfolded - 5
  • TOD not monitored - 1
  • Not looking while calling out mode changes - 2
  • Not looking during engine start - 1
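As with the checklist errors, the itemized monitoring-error counts reconcile with the stated total and the per-flight average reported on the previous slide:

```python
# Monitoring-error counts from the list above, in order of appearance
counts = [7, 3, 2, 2, 36, 7, 7, 9, 5, 1, 2, 1]
flights = 22  # observed flights

total = sum(counts)                     # 82 monitoring errors in all
per_flight = round(total / flights, 1)  # 3.7, matching the reported average
print(total, per_flight)
```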
Undesired Aircraft State Not Challenged (7 instances)
  • Example: 3 of 22 flights involved high and fast approaches, unstabilized at 1,000 feet
    • 2 continued to landing, one well over Vref almost to touchdown
    • Time compression → snowballing workload
    • Pilot Monitoring (FO) did not make deviation callouts or 1000 foot callout
    • Consistent with a flight simulation study by Orasanu et al.: First Officers sometimes fail to challenge Captains’ flying, or challenge only indirectly
Inadequate Monitoring of Automation (> 60 instances)
  • Undesired aircraft state caused by incorrect data input & not caught until became serious threat (7 instances); e.g.:
    • Crew entered higher altitude in mode control panel, not noticing vertical mode in ALT ACQ → 4,100 fpm climb and airspeed decay to 230 knots
  • Input to FMC or MCP not cross-checked by other pilot (9 instances)
    • Most inputs correct, but increased vulnerability to undesired aircraft state
  • Failure to make 1000 feet-to-go call until cued by chime (36 instances)
  • Crew head-down when autopilot leveled aircraft (7 instances), in one instance at wrong altitude
  • Required callouts of automation mode made without looking at relevant aircraft parameters
    • Similar to “checking without looking” — automatic verbal response and expectation bias
Too Easy to Blame Monitoring Errors on “Laziness”
  • Why don’t pilots just follow FOM?
  • “Blame the pilot” explanations ignore reality of:
    • Cognitive mechanisms for processing information
    • Competing demands for attention in actual line ops
    • Design of cockpit automation interfaces
  • Humans best maintain situation awareness when actively controlling system rather than just monitoring
    • Human brain not wired to reliably monitor system that rarely fails
  • Monitoring too often treated as a secondary task
  • Lack of bad consequences when pilots forget to monitor
    • Forgetting “primary” task (e.g., setting flaps) always gives negative feedback; loss of monitoring often has no immediate consequence
Countermeasures
  • Long-term remedy: Re-think the roles of pilots and automation and design interfaces from first principles to match human operating characteristics
    • Difficult, slow, and costly
  • No magic solutions, but can improve situation with tools at hand
    • Skip the lectures; beating up pilots doesn’t change performance
    • Do a better job of explaining nature of vulnerability to error and explaining rationale for procedures that seem redundant and cumbersome
Specific Suggestions
  • Fast, fluid execution of tasks is normally a mark of skill, but it is not appropriate for checklists. SLOW DOWN; use techniques such as “point and shoot” to make execution deliberate.
  • Rushing saves, at best, a few seconds and drastically increases vulnerability to error. SLOW DOWN. Treat time-pressured situations as RED FLAGS requiring extra caution.
  • Explain the importance of REDUNDANCY. Establish policy for how each checklist is to be executed, for flow-then-check procedures, and for how monitoring and cross-checking are to be executed.
  • Think carefully about WHEN to initiate cockpit tasks, e.g.:
    • Avoid calling for Taxi checklist when approaching intersection
    • Small changes in managing workload can substantially improve monitoring
Specific Suggestions (continued)
  • ELEVATE monitoring to an essential task
    • SOP should specify WHO, WHAT, WHEN, & HOW
    • EMPHASIZE in training and checking
      • Too often focus on primary task → negative reinforcement for monitoring
  • CHECK adherence regularly and provide feedback to pilots
    • Captains and Check Pilots should set the example
  • Periodically REVIEW DESIGN of all normal and non-normal operating procedures
    • Poor design invites errors, e.g.:
      • Silent annunciation of flight-critical items
      • Setting flaps during taxi → distracts from monitoring
      • Checklists that must be suspended before completion
This research is supported by the FAA (Dr. Eleana Edens, program manager) and the NASA Aviation Safety Program.

Thanks to Dr. Loukia Loukopoulos for helpful suggestions on the design of this study.

More information on NASA Human Factors Research:

http://human-factors.arc.nasa.gov/ihs/flightcognition

Dismukes, R.K., Berman, B., & Loukopoulos, L. D. (2006).  The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. Aldershot, U.K.: Ashgate.
