Paper Title

Multimodal Interfaces for Effective Teleoperation

Paper Authors

Triantafyllidis, Eleftherios, McGreavy, Christopher, Gu, Jiacheng, Li, Zhibin

Paper Abstract

Research in multi-modal interfaces aims to provide immersive solutions and to increase overall human performance. A promising direction is combining auditory, visual and haptic interaction between the user and the simulated environment. However, no extensive comparisons exist to show how combining audiovisuohaptic interfaces affects human perception as reflected in task performance. Our paper explores this idea. We present a thorough, full-factorial comparison of how all combinations of audio, visual and haptic interfaces affect performance during manipulation. We evaluate how each interface combination affects performance in a study (N=25) consisting of manipulation tasks of varying difficulty. Performance is assessed using both subjective measures, covering cognitive workload and system usability, and objective measures, incorporating time-based and spatial-accuracy-based metrics. Results show that, regardless of task complexity, stereoscopic vision through the VR HMD increased performance across all measurements by 40% compared to monocular vision from the display monitor. Haptic feedback improved outcomes by 10%, and auditory feedback accounted for approximately a 5% improvement.
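To make the full-factorial design concrete, below is a minimal Python sketch enumerating every combination of the three interface factors the abstract describes (visual: monitor vs. VR HMD; auditory feedback on/off; haptic feedback on/off). The factor names and level labels are illustrative assumptions, not the authors' experimental code.

```python
# Hypothetical sketch of the full-factorial condition grid implied by the
# abstract. Factor names and level labels are assumptions for illustration.
from itertools import product

visual_levels = ["monitor (monocular)", "VR HMD (stereoscopic)"]
audio_levels = [False, True]    # auditory feedback off/on
haptic_levels = [False, True]   # haptic feedback off/on

conditions = [
    {"visual": v, "audio": a, "haptic": h}
    for v, a, h in product(visual_levels, audio_levels, haptic_levels)
]

# A 2x2x2 full factorial yields 8 interface combinations, each of which
# would be evaluated across manipulation tasks of varying difficulty.
for i, cond in enumerate(conditions, start=1):
    print(f"Condition {i}: {cond}")
```

Enumerating the grid this way makes explicit that a full-factorial comparison over three binary factors requires 2³ = 8 conditions per participant.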
