Paper Title
A Systematic Review on Affective Computing: Emotion Models, Databases, and Recent Advances
Paper Authors
Paper Abstract
Affective computing plays a key role in human-computer interaction, entertainment, teaching, safe driving, and multimedia integration. Major breakthroughs have recently been made in the areas of affective computing (i.e., emotion recognition and sentiment analysis). Affective computing is realized based on unimodal or multimodal data, consisting primarily of physical information (e.g., textual, audio, and visual data) and physiological signals (e.g., EEG and ECG signals). Physical-based affect recognition attracts more researchers owing to the availability of multiple public databases; however, it is hard to reveal one's inner emotion when it is purposely hidden behind facial expressions, audio tones, body gestures, etc. Physiological signals can yield more precise and reliable emotional results, yet the difficulty of acquiring physiological signals also hinders their practical application. Thus, the fusion of physical information and physiological signals can provide informative features of emotional states and lead to higher accuracy. Instead of focusing on one specific field of affective analysis, we systematically review recent advances in affective computing and taxonomize unimodal affect recognition as well as multimodal affective analysis. First, we introduce two typical emotion models, followed by commonly used databases for affective computing. Next, we survey and taxonomize state-of-the-art unimodal affect recognition and multimodal affective analysis in terms of their detailed architectures and performance. Finally, we discuss some important aspects of affective computing and its applications, and conclude this review with the most promising future directions, such as the establishment of baseline datasets, fusion strategies for multimodal affective analysis, and unsupervised learning models.