Location

Salt Lake City, Utah

Date

June 25, 2015

Session

Session 8 – Hybrid Presentations

Abstract

This study examined differences in the impact of visual-manual and auditory-vocal radio tuning tasks on field driving performance. Engagement in visual-manual tuning tasks was associated with higher steering wheel reversal rates than baseline driving. Both visual-manual and auditory-vocal tuning tasks were associated with higher variance in speed maintenance compared to baseline driving. Models were built to use driving performance measurements as input to a classifier that aimed to distinguish between three states (i.e., baseline driving, visual-manual tuning, and auditory-vocal tuning). Baseline driving could be classified from visual-manual tuning at an accuracy of over 99% and from auditory-vocal tuning at an accuracy of 93.3%. Models could differentiate between the two modalities at an accuracy of 75.2% and among the three classes at an accuracy of 81.2%. Results suggest that changes in driving performance associated with visual-manual tuning are statistically distinguishable from those associated with auditory-vocal tuning. While not free of visual-manual demand, tasks that involve auditory-vocal interactions appear to differ from visual-manual tasks in how they impact driving performance.

Comments

Honda Outstanding Student Paper Award, 2nd place

Rights

Copyright © 2015 the author(s)

DC Citation

Proceedings of the Eighth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, June 22-25, 2015, Salt Lake City, Utah. Iowa City, IA: Public Policy Center, University of Iowa, 2015: 387-393.


Predicting Secondary Task Involvement and Differences in Task Modality Using Field Highway Driving Data
