Paper Title
A temporal-to-spatial deep convolutional neural network for classification of hand movements from multichannel electromyography data
Paper Authors
Paper Abstract
Deep convolutional neural networks (CNNs) are appealing for the purpose of classification of hand movements from surface electromyography (sEMG) data because they have the ability to perform automated person-specific feature extraction from raw data. In this paper, we make the novel contribution of proposing and evaluating a design for the early processing layers in the deep CNN for multichannel sEMG. Specifically, we propose a novel temporal-to-spatial (TtS) CNN architecture, where the first layer performs convolution separately on each sEMG channel to extract temporal features. This is motivated by the idea that sEMG signals in each channel are mediated by one or a small subset of muscles, whose temporal activation patterns are associated with the signature features of a gesture. The temporal layer captures these signature features for each channel separately, which are then spatially mixed in successive layers to recognise a specific gesture. A practical advantage is that this approach also makes the CNN simple to design for different sample rates. We use NinaPro database 1 (27 subjects and 52 movements + rest), sampled at 100 Hz, and database 2 (40 subjects and 40 movements + rest), sampled at 2 kHz, to evaluate our proposed CNN design. We benchmark against a feature-based support vector machine (SVM) classifier, two CNNs from the literature, and an additional standard design of CNN. We find that our novel TtS CNN design achieves 66.6% per-class accuracy on database 1, and 67.8% on database 2, and that the TtS CNN outperforms all other compared classifiers using a statistical hypothesis test at the 2% significance level.
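As a concrete illustration of the temporal-to-spatial idea described in the abstract, below is a minimal PyTorch sketch (not the authors' implementation): a depthwise 1-D convolution filters each sEMG channel separately along the time axis, and a subsequent 1x1 convolution mixes the per-channel features spatially before classification. The channel count, kernel length, window length, and layer widths are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a temporal-to-spatial (TtS) CNN for multichannel sEMG.
# First stage: convolve each sEMG channel separately along time (temporal features).
# Second stage: mix features across channels (spatial mixing), then classify.
# All sizes below are illustrative assumptions.
import torch
import torch.nn as nn


class TtSCNN(nn.Module):
    def __init__(self, n_channels=10, n_classes=53):
        super().__init__()
        # Temporal stage: groups=n_channels makes the 1-D convolution depthwise,
        # so each sEMG channel is filtered independently along time.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_channels, n_channels * 8, kernel_size=15,
                      groups=n_channels, padding=7),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # Spatial stage: a kernel-size-1 convolution mixes features across channels.
        self.spatial = nn.Sequential(
            nn.Conv1d(n_channels * 8, 64, kernel_size=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, window_len) window of raw sEMG samples
        h = self.temporal(x)
        h = self.spatial(h)
        return self.classifier(h.flatten(1))


if __name__ == "__main__":
    # Example: 10 electrodes, 200-sample windows, 52 movements + rest = 53 classes
    model = TtSCNN(n_channels=10, n_classes=53)
    dummy = torch.randn(4, 10, 200)
    print(model(dummy).shape)  # -> torch.Size([4, 53])
```

Expressing "convolve each channel separately" via a grouped convolution is one possible realisation of the temporal layer; because the temporal stage only sets a kernel length in samples, adapting the sketch to a different sample rate (e.g. 100 Hz vs. 2 kHz) mainly means rescaling the kernel and window lengths, which mirrors the practical advantage claimed in the abstract.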