Conversation, Vision, and Driving: Experimental Predictions vs. Real-World Crashes Richard A. Young General Motors Engineering Wayne State University School of Medicine
Acknowledgements • Chris Schreiner, Arizona State University • Li Hsieh, Wayne State University • Richard J. Genik II, Christopher C. Green, Wayne State University Medical School • Susan M. Bowyer, John E. Moran, Norman Tepley, Henry Ford Hospital • Linda Angell, GM Safety Center • Jon Hankey, Luke Neurauter, Virginia Tech Transportation Institute
Background • Although often casually discussed as though it were a single task, driving is actually several tasks that play out at different levels of a functional hierarchy, on different timescales, and often concurrently. • An important issue is the design of driver-vehicle interfaces that support safe and efficient vehicle operation under such multi-tasking conditions, particularly with the addition of secondary in-vehicle tasks (e.g., cell phone conversations). • Little is known about the brain mechanisms underlying primary driving tasks, much less secondary tasks, although it is likely that there is some functional overlap, and perhaps consequent interference, between primary and secondary tasks. • While interference between the manual requirements of driving and of secondary tasks is obvious (dialing a telephone and steering a vehicle compete for the same hands), the less obvious cognitive demands of secondary tasks are a growing concern. • Some simulator studies claim that the distracting effects of, for example, a cell phone conversation may be significant. • Such claims require careful investigation of what might cause such effects in the laboratory, and of whether such effects produce detectable increases in real-world crash rates compared to normal baseline driving.
Society Costs and Health Risks • Motor vehicle crashes cost $231 billion per year in U.S. alone • # 8 leading cause of death (Anderson & Smith, 2003) • # 4 largest public health problem in U.S. • Exceeded only by heart disease, depression and stroke • Driver error is the principal cause in 45% to 75% of crashes (Wierwille et al., 2002) • 38% of all crashes are related to driver distraction (Virginia Tech Transportation Institute, 2005)
Objectives • (1) Compare the behavioral and neural correlates of conversation effects on visual event detection during driving using the same visual event detection paradigm in brain imaging, behavioral testing, and closed-road driving experiments • (2) Evaluate hypotheses that may help explain the discrepancy between the predicted crash rates from such experimental studies and the observed real-world airbag-deployment crash rates during conversation on a mobile phone embedded in a vehicle.
Methods • Previous laboratory and closed-road experimental studies, and some epidemiological real-world studies, predict an increase in crashes arising from phone conversations during real-world driving compared to baseline driving. • One proposed underlying mechanism is the possible effect of conversation on detection of visual events. • To investigate this mechanism, the "load" paradigm (Angell et al., 2002; Young et al., 2005b) assessed the effects of conversation on visual event detection during simulated driving in behavioral labs, fMRI and MEG imaging centers, and actual driving on a closed road. • The primary task was to depress a foot pedal in response to a small red light presented to the left of or below the driving scene at unpredictable times. The secondary task was to engage in a conversation. • The participant pressed a button to answer a ring tone, and then answered simple auditory questions such as "What is your birthdate?". fMRI (Young, 2005a) and MEG data (Bowyer et al., 2006) were analyzed to examine the neural substrates of driving with and without conversation, and the behavioral results were validated in the lab and closed-road studies. • The predictions of these and other experimental investigations were then compared with a large and complete body of real-world data associating conversation on mobile phones embedded in vehicles with crashes severe enough to deploy an airbag (Young, 2001). • Data and analyses are presented to evaluate hypotheses that may help explain the differences between the real-world crash data and the experimental predictions.
Methods – Six Sites • Wayne State Medical School fMRI Lab (behavior + brain) • Henry Ford Hospital MEG Lab (behavior + brain) • Wayne State University Speech-Language Lab (static behavior) • GM Milford “Usability Lab” (static behavior) • Virginia Tech Smart Road (closed-road) • Real-World Driving (OnStar)
1. WSU School of Medicine: fMRI Research Facility • 1.5 T and 4 T Siemens scanners • Headphones
2. Henry Ford Hospital: MEG Driving Study • MEG: A technique for localizing sources of electrical activity within the human brain by non-invasively measuring the magnetic fields arising from such activity.
2. MEG Experimental Set-Up [Figure: subject in the MEG with a mirror arrangement; the subject's view shows central and peripheral light events]
3. WSU: Speech-Language Neuroscience Laboratory Static Behavioral Driving Test Lab
6. Methods: Real-World Driving (OnStar) • Predictions from these experimental findings were validated against a crash database to see if conversation increases the rate of crashes in the real world when compared to baseline driving. • We examined real-world data from a database of 91 million personal calls from three million drivers over a period of 2.5 years. • We counted the number of cases in which a hands-free cellular call was in progress when an air bag crash occurred. • We then calculated a crash Incident Rate Ratio (IRR) comparing the Incident Rate of an air bag crash while engaged in a call to the Incident Rate of an air bag crash during baseline driving for the same population of drivers.
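The Incident Rate Ratio described above can be sketched as follows. This is a minimal illustration, not the study's analysis: the exposure units (call-hours vs. baseline driving-hours) and all numbers below are placeholder assumptions for demonstration only.

```python
# Hedged sketch of the crash Incident Rate Ratio (IRR) comparison described above.
# IRR = (air bag crash rate while engaged in a call) / (air bag crash rate during
# baseline driving) for the same population of drivers.

def incident_rate_ratio(crashes_during_calls, call_exposure_hours,
                        crashes_baseline, baseline_exposure_hours):
    """Ratio of the in-call crash rate to the baseline crash rate.

    An IRR of 1.0 means calls carry the same crash rate as baseline driving;
    values below 1.0 mean no elevated rate during calls.
    """
    rate_calls = crashes_during_calls / call_exposure_hours
    rate_baseline = crashes_baseline / baseline_exposure_hours
    return rate_calls / rate_baseline

# Placeholder values, NOT the OnStar data:
irr = incident_rate_ratio(crashes_during_calls=4, call_exposure_hours=50_000,
                          crashes_baseline=900, baseline_exposure_hours=10_000_000)
print(round(irr, 2))  # → 0.89 with these illustrative numbers
```

A full analysis would also attach a confidence interval to the IRR (e.g., via a Poisson model of crash counts with exposure offsets), since crash events are rare.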
1. fMRI Results: Dual vs. View • Top views of the brain at different slices of a horizontal plane. • Increase in brain activation (colored regions) in the DUAL task vs. the VIEW task. This image shows the brain activation regions specific to the EVENT task when performed in a driving-like context. • Sagittal (side) view of one "slice" of the brain • Color scale gives the t-score for the comparison in brain activation • Key: PM (medial pre-motor), AC (anterior cingulate), PC (precuneus), IPL (inferior parietal lobule), SPL (superior parietal lobule), PCG (postcentral gyrus), Cb (cerebellum).
1-2. Summary Results - Brain • Modulations in the right superior parietal region and visual cortices specifically mediated the baseline reaction time to visual light events. • Conversation in these settings appears to contribute to increased reaction times by reducing brain modulation to visual events in these specific brain regions.
4. Results - Milford • TBD
5. Results – Smart Road [Chart: visual event reaction times by task condition (Baseline, 10-Digit Dial, Short Conversation, Long Conversation) and device type (Handheld, Handsfree, OnStar*)]
1-5 Overall Experimental Results - Behavioral • Conversation slightly increased visual event reaction times in laboratory and closed-road driving experiments compared to a no-conversation baseline, with little or no effect on miss rates. • The short conversation condition produced a longer visual reaction time than the long conversation condition.
6. Results: Real-World Driving (OnStar) –ORAL ONLY – NOT FOR DISTRIBUTION Slide deleted
6 Results – Real World (2) • The absolute frequency of airbag deployment crashes occurring during conversation on a wireless phone embedded in a vehicle is lower than predicted by experimental data. • The relative airbag deployment crash rate is no greater than baseline driving, also contrary to prediction from experimental data.
Discussion • Five hypotheses help explain the discrepancy between the experimental predictions and real crash rates: • The visual reaction time increase arising from conversations in experimental studies may be too small to give rise to a detectable increase in real-world crash rates; • Lab-based event detection does not account fully for similar types of event detection in real driving, let alone for crashes (Angell et al., 2006; Curry et al., 2005; Young et al., 2005); • Conversation may at times mitigate risks such as fatigue; • Portable and embedded cell phones may have substantially different human factors properties; • Drivers may tend to change behavior during calls in ways that reduce net risk: e.g., placing calls in relatively benign driving conditions, reducing risky driving maneuvers, increasing headway, glancing longer to the forward roadway, or increasing glances to mirrors after detecting an event.
Conclusions • The real-world airbag-deployment crash rate during conversations on a hands-free wireless device embedded in a vehicle is substantially lower than predicted by brain-imaging, simulator, closed-road, or epidemiological studies. • Several hypotheses may help resolve the discrepancy between the predicted and observed real-world crash rates. • This study concludes that claims that conversation effects on visual event detection, as observed in experiments, can accurately predict real-world crash rates require careful investigation.
Funding Acknowledgments • The baseline research at Wayne State Medical School was supported by an unrestricted gift from the GM Foundation. • The baseline research at Henry Ford Hospital was supported by NIH/NINDS Grant RO1-NS30914. • The work adding secondary tasks is supported by grants from the Crash Avoidance Metrics Partnership (CAMP) funded by GM and Ford Motor Company to Wayne State Medical School and Henry Ford Hospital. • Current research is funded by Michigan Technology Tri-County Corridor.
References • Angell, L. S., Young, R. A., Hankey, J. M. & Dingus, T. A. (2002). "An evaluation of alternative methods for assessing driver workload in the early development of in-vehicle information systems," SAE Proceedings, 2002-01-1981, Joint Industry/Government SAE Conference, Washington, D.C. • Angell, L. S., Auflick, L. L., Austria, P. A., Kochhar, D. S., Tijerina, L., Biever, W., Diptiman, T., Hogsett, J. & Kiger, S. (CAMP) (2006). Driver Workload Metrics Project: Final Report. Sponsored by National Highway Traffic Safety Administration, Washington, D.C., November. DOT HS 810 635. • Angell, L. S., Auflick, L. L., Austria, P. A., Kochhar, D. S., Tijerina, L., Biever, W., Diptiman, T., Hogsett, J. & Kiger, S. (CAMP) (2006). Driver Workload Metrics Project: Final Report - Appendices. Sponsored by National Highway Traffic Safety Administration, Washington, D.C., November. DOT HS 810 635. • Bowyer, S., Moran, J., Hsieh, L., Manoharan, A., Young, R. A., Malladi, K., Yu, Y-J., Chiang, Y-R., Hersberger, R., Genik, R., & Tepley, N. (2006). "MEG localization of neural mechanisms underlying reaction time to visual events while watching a driving video: Effects of conversation," International Congress Series: New Frontiers in Biomagnetism: Proc. of the 15th International Conference on Biomagnetism, Vancouver, BC, Canada, August 21-25, 2006. D. Cheyne, B. Ross, G. Stroink & H. Weinberg (Eds.). • Curry, R. C., Greenberg, J. A., & Kiefer, R. J. (2005). "NADS versus CAMP closed-course comparison examining 'last second' braking and steering maneuvers under various kinematic conditions," Crash Avoidance Metrics Partnership (CAMP), Contract DTFH61-01-X-00014, Washington, D.C., August. DOT HS 809 925. • Young, R. (2001). "Association between embedded cellular phone calls and vehicle crashes involving air bag deployment," Proc. of Driving Assessment 2001: International Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Snowmass, Colorado, August. • Young, R. A., Hsieh, L., Graydon, F. X., Genik II, R., Benton, M. D., Green, C. C., Bowyer, S. M., Moran, J. E., & Tepley, N. (2005a). "Mind-on-the-Drive: Real-time functional neuroimaging of cognitive brain mechanisms underlying driver performance and distraction," Human Factors in Driving, Telematics and Seating Comfort 2005, SP-1934, Society of Automotive Engineers, Warrendale, PA, April. • Young, R. A., Aryal, B., Muresan, M., Ding, X., Oja, S. & Simpson, S. (2005b). "Road-to-lab: Validation of the static load test for predicting on-road driving performance while using advanced in-vehicle information and communication devices," Proc. of the Third International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Rockport, Maine, July.
Additional Data and Studies • Crash-Avoidance Metrics Partnership (static, closed-road, and on-road)
Mind on the Drive • Metrics for the major aspects of driver visual-manual distraction are well established • Measure eyes-off-road time, hands-off-wheel time, etc. • Cognitive distraction is not as easy to measure • Cognitive activity may not be reflected in observable behavior • Drivers' reports of their own distraction are insufficient • Drivers are not always aware that they are distracted • Problem: Can Mind on the Drive be directly measured?
Solution: Brain Imaging • Modern brain imaging techniques may allow direct measurement of cognitive driving distraction • Benefits: • Gives a sound, scientific basis to understanding cognitive aspects of driver distraction • Saves time and expense in designing secondary systems to minimize cognitive distraction • By understanding the fundamental underlying mechanisms of cognitive distraction, testing of in-vehicle systems or principles can be hypothesis-driven, requiring less time and expense to conduct such tests. • Improves public, private, and governmental understanding in this important area • Knowledge of the underlying mechanism may lead to guidelines that reduce driver error
Objectives of Study Reported Today • To establish a baseline foundation for the basic brain events extending from the detection of a light event on a roadway to the formation of a braking response by the foot. • This baseline foundation is reported here today. • This study lays the foundation for understanding the effect of adding secondary tasks in the vehicle, such as cell phone conversations, on basic event detection and braking responses.
fMRI at Wayne State Medical School • fMRI = “Functional Magnetic Resonance Imaging” • Allows the researcher to see the activation of different regions of the brain to a high degree of spatial accuracy. • fMRI measures molecular changes associated with blood flow that are in turn associated with neural activity • Pulses a magnetic field into the intact awake brain • Measures the return magnetic response of the brain to that pulsed field.
Benefits of fMRI • Safe and non-invasive • No risk to subjects • High spatial resolution • Identifies the activated brain structures exactly • Can image deep structures in the brain (limbic system) • Supports other imaging techniques • fMRI knowledge can be imported into other imaging solution sets, tying down the brain regions involved • fMRI enhances the findings of other imaging methods, by improving the accuracy and robustness of these other results.
fMRI Experiment: Methods • Four stimulus conditions: (1) VIEW task alone (no events); (2) EVENT task alone; (3) DUAL task (VIEW + EVENTS); (4) baseline fixation.
fMRI Boxcar Stimulus Presentation • Boxcar design: alternating blocks of baseline fixation (40 seconds) and task condition 1, 2, or 3 (40 seconds), repeated for four task blocks and ending in baseline (360 seconds total). The timing of the event lights in conditions 1 and 3 is illustrated in the figure. [Figure: video and event-light timing across the run; time axis in seconds]
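The boxcar timing described above can be sketched as a simple 0/1 task regressor. This is a minimal illustration of the block structure only; the 1-second sampling resolution is an assumption for demonstration, not a parameter from the study.

```python
# Hedged sketch of the boxcar (block) design: alternating 40 s baseline-fixation
# and 40 s task blocks, four task blocks per run, ending in baseline (360 s total).

def boxcar_regressor(block_s=40, n_task_blocks=4, dt=1.0):
    """Return a list of 0/1 samples: 0 = baseline fixation, 1 = task block.

    dt is the sampling interval in seconds (assumed 1.0 here for illustration).
    """
    samples_per_block = int(block_s / dt)
    timeline = []
    for _ in range(n_task_blocks):
        timeline += [0] * samples_per_block  # baseline fixation (40 s)
        timeline += [1] * samples_per_block  # task: VIEW, EVENT, or DUAL (40 s)
    timeline += [0] * samples_per_block      # final baseline block
    return timeline

reg = boxcar_regressor()
print(len(reg))   # → 360 samples at 1 s resolution, i.e. a 360 s run
print(sum(reg))   # → 160 task samples (4 blocks x 40 s)
```

In a GLM analysis this regressor would typically be convolved with a hemodynamic response function before fitting, since the BOLD signal lags the stimulus blocks.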