Title
CoCAtt: A Cognitive-Conditioned Driver Attention Dataset (Supplementary Material)
Authors
Abstract
The task of driver attention prediction has drawn considerable interest among researchers in robotics and the autonomous vehicle industry. Driver attention prediction can play an instrumental role in mitigating and preventing high-risk events, like collisions and casualties. However, existing driver attention prediction models neglect the distraction state and intention of the driver, which can significantly influence how they observe their surroundings. To address these issues, we present a new driver attention dataset, CoCAtt (Cognitive-Conditioned Attention). Unlike previous driver attention datasets, CoCAtt includes per-frame annotations that describe the distraction state and intention of the driver. In addition, the attention data in our dataset is captured in both manual and autopilot modes using eye-tracking devices of different resolutions. Our results demonstrate that incorporating the above two driver states into attention modeling can improve the performance of driver attention prediction. To the best of our knowledge, this work is the first to provide autopilot attention data. Furthermore, CoCAtt is currently the largest and the most diverse driver attention dataset in terms of autonomy levels, eye tracker resolutions, and driving scenarios. CoCAtt is available for download at https://cocatt-dataset.github.io.