Paper Title
WatchPed: Pedestrian Crossing Intention Prediction Using Embedded Sensors of Smartwatch
Paper Authors
Paper Abstract
The pedestrian crossing intention prediction problem is to estimate whether or not a target pedestrian will cross the street. State-of-the-art techniques depend heavily on visual data acquired through the front camera of the ego-vehicle to predict the pedestrian's crossing intention. Consequently, the performance of current methods degrades notably when the visual input is imprecise, for instance, when the distance between the pedestrian and the ego-vehicle is large or the illumination is inadequate. To address this limitation, we present the design, implementation, and evaluation of a first-of-its-kind pedestrian crossing intention prediction model based on the integration of motion sensor data gathered through the pedestrian's smartwatch (or smartphone). We propose a machine learning framework that effectively integrates motion sensor data with visual input to significantly improve predictive accuracy, particularly in scenarios where visual data may be unreliable. Moreover, we performed an extensive data collection process and introduce the first pedestrian intention prediction dataset featuring synchronized motion sensor data. The dataset comprises 255 video clips covering diverse distances and lighting conditions. We trained our model on the widely used JAAD dataset as well as our own dataset and compared its performance with a state-of-the-art model. The results demonstrate that our model outperforms the current state-of-the-art method, particularly when the distance between the pedestrian and the observer is large (more than 70 meters) and the lighting conditions are inadequate.
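The abstract describes fusing motion sensor data with visual input for a binary crossing/not-crossing prediction, but does not specify the fusion architecture here. As a rough, hypothetical illustration of the general idea, the sketch below concatenates a visual feature vector with a motion-sensor (IMU) feature vector and feeds the result to a single linear classifier. All names, dimensions, and the late-fusion design are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_crossing(visual_feat, imu_feat, w, b):
    """Hypothetical late-fusion sketch: concatenate the visual embedding
    with the motion-sensor features, apply one linear layer, and squash
    the logit through a sigmoid to get P(pedestrian will cross)."""
    fused = np.concatenate([visual_feat, imu_feat])
    logit = fused @ w + b
    return 1.0 / (1.0 + np.exp(-logit))

# Toy inputs: an assumed 128-d visual embedding plus a 6-d IMU summary
# (e.g., mean accelerometer and gyroscope readings from the smartwatch).
visual_feat = rng.standard_normal(128)
imu_feat = rng.standard_normal(6)
w = rng.standard_normal(134) * 0.01  # weights for the fused 134-d vector
b = 0.0

p = predict_crossing(visual_feat, imu_feat, w, b)
print(0.0 < p < 1.0)
```

In practice, such a model would be trained end-to-end on labeled clips (as with the JAAD and collected datasets mentioned above); the sensor branch lets the classifier fall back on motion cues when the visual branch is degraded by distance or low light.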