Paper Title

CaSS: A Channel-aware Self-supervised Representation Learning Framework for Multivariate Time Series Classification

Authors

Yijiang Chen, Xiangdong Zhou, Zhen Xing, Zhidan Liu, Minyang Xu

Abstract

Self-supervised representation learning of Multivariate Time Series (MTS) is a challenging task that has attracted increasing research interest in recent years. Many previous works focus on the pretext task of self-supervised learning and usually neglect the complex problem of MTS encoding, leading to unpromising results. In this paper, we tackle this challenge from two aspects, the encoder and the pretext task, and propose a unified channel-aware self-supervised learning framework, CaSS. Specifically, we first design a new Transformer-based encoder, Channel-aware Transformer (CaT), to capture the complex relationships between the different time channels of MTS. Second, we combine two novel pretext tasks, Next Trend Prediction (NTP) and Contextual Similarity (CS), for self-supervised representation learning with our proposed encoder. Extensive experiments are conducted on several commonly used benchmark datasets. The experimental results show that our framework achieves a new state of the art compared with previous self-supervised MTS representation learning methods (up to +7.70\% improvement on the LSST dataset) and can be applied well to downstream MTS classification.
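To make the two pretext tasks concrete, here is a minimal NumPy sketch of the ideas behind them, not the paper's exact formulation: Next Trend Prediction reduces to predicting, per channel, whether the next value rises or falls, and Contextual Similarity is illustrated with a generic InfoNCE-style contrastive loss between two augmented views. The function names and the `temperature` parameter are hypothetical choices for illustration.

```python
import numpy as np

def next_trend_labels(x):
    """Binary up/down targets for a Next-Trend-Prediction-style task.

    x: (T, C) multivariate series. The label is 1 when the next value in
    a channel rises, 0 otherwise (a simplified stand-in for the paper's
    NTP pretext task).
    """
    return (x[1:] > x[:-1]).astype(np.int64)  # shape (T - 1, C)

def contextual_similarity_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style loss illustrating a Contextual-Similarity-like task.

    z_a, z_b: (N, D) embeddings of two augmented views of the same N
    series; matching rows are treated as positive pairs, all other rows
    as negatives.
    """
    # Cosine-normalize both sets of embeddings.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on positives
```

In a full pipeline, both losses would be computed on representations produced by the encoder (CaT in the paper) and summed, so that the encoder is trained jointly by the two pretext tasks.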
