Paper Title

Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces

Authors

Gomaa, Amr

Abstract

With the recently increasing capabilities of modern vehicles, novel interaction approaches have emerged that go beyond traditional touch-based and voice-command approaches. Consequently, hand gestures, head pose, eye gaze, and speech have been extensively investigated in automotive applications for object selection and referencing. Despite these significant advances, existing approaches mostly employ a one-model-fits-all design that is unsuitable for varying user behavior and individual differences. Moreover, current referencing approaches either consider these modalities separately or focus on a stationary setting, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints. In this paper, I propose a research plan for a user-centered adaptive multimodal fusion approach for referencing external objects from a moving vehicle. The proposed plan aims to provide an open-source framework for user-centered adaptation and personalization, using user observations and heuristics, multimodal fusion, clustering, transfer learning for model adaptation, and continuous learning, moving towards trusted human-centered artificial intelligence.
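
To make the fusion and adaptation steps named in the abstract more concrete, here is a minimal sketch. It is purely illustrative and not the author's implementation: it assumes each modality (gaze, gesture, speech) yields a probability distribution over candidate external objects, combines them by weighted late fusion, and then nudges per-user modality weights from observed feedback as a toy stand-in for the proposed transfer-learning and continuous-learning components. All function names (fuse_modalities, adapt_weights) and numbers are hypothetical.

```python
import numpy as np

def fuse_modalities(modality_probs, weights):
    """Weighted late fusion: combine per-modality probability
    distributions over candidate objects into one ranking."""
    names = list(modality_probs)
    fused = np.zeros(len(modality_probs[names[0]]))
    total = 0.0
    for name in names:
        w = weights.get(name, 0.0)
        fused += w * np.asarray(modality_probs[name], dtype=float)
        total += w
    return fused / total if total > 0 else fused

def adapt_weights(weights, hit_rates, lr=0.1):
    """Toy continuous adaptation: move each modality's weight toward
    its recent per-user success rate, then renormalize."""
    for name, rate in hit_rates.items():
        weights[name] = (1 - lr) * weights.get(name, 0.0) + lr * rate
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

# Hypothetical per-modality beliefs over three external objects
# (e.g., buildings the vehicle is passing).
probs = {
    "gaze":    [0.6, 0.3, 0.1],
    "gesture": [0.2, 0.5, 0.3],
    "speech":  [0.3, 0.4, 0.3],
}
weights = {"gaze": 0.5, "gesture": 0.3, "speech": 0.2}  # user-specific

fused = fuse_modalities(probs, weights)
print("selected object:", int(np.argmax(fused)))

# After observing which modalities pointed at the confirmed referent,
# adapt this user's weights for future interactions.
weights = adapt_weights(weights, {"gaze": 0.9, "gesture": 0.6, "speech": 0.5})
print("adapted weights:", weights)
```

A design note on why late fusion suits this setting: it keeps each modality's recognizer independent, so per-user personalization can, in the simplest case, reduce to re-weighting the fusion stage rather than retraining every model.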
