Paper Title

Detecting Anomalies within Time Series using Local Neural Transformations

Paper Authors

Tim Schneider, Chen Qiu, Marius Kloft, Decky Aspandi Latif, Steffen Staab, Stephan Mandt, Maja Rudolph

Abstract


We develop a new method to detect anomalies within time series, which is essential in many application domains, ranging from self-driving cars, finance, and marketing to medical diagnosis and epidemiology. The method is based on self-supervised deep learning, which has played a key role in facilitating deep anomaly detection on images, where powerful image transformations are available. However, such transformations are widely unavailable for time series. Addressing this, we develop Local Neural Transformations (LNT), a method that learns local transformations of time series from data. The method produces an anomaly score for each time step and thus can be used to detect anomalies within time series. We prove in a theoretical analysis that our novel training objective is more suitable for transformation learning than previous deep anomaly detection (AD) methods. Our experiments demonstrate that LNT can find anomalies in speech segments from the LibriSpeech data set and better detect interruptions to cyber-physical systems than previous work. Visualization of the learned transformations gives insight into the type of transformations that LNT learns.
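To make the idea concrete, the sketch below illustrates the general shape of such a pipeline: encode each time step into a local representation, apply a bank of learned transformations, and turn the (dis)agreement between the transformed views and the original representation into a per-time-step anomaly score. This is a minimal, hypothetical illustration only — the encoder, the transformation bank, the scoring function, and all dimensions here are stand-ins (random, untrained parameters), not the authors' architecture or their exact contrastive training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
D_IN, D_REP, K = 8, 16, 4   # input dim, representation dim, number of transformations

# Stand-ins for *learned* parameters. In LNT these would be trained with a
# self-supervised contrastive objective; here they are random for illustration.
W_enc = rng.normal(size=(D_IN, D_REP))        # local encoder
W_trans = rng.normal(size=(K, D_REP, D_REP))  # K neural transformations

def encode(x_t):
    """Map one time step to a local representation (sketch: linear + tanh)."""
    return np.tanh(x_t @ W_enc)

def anomaly_scores(x):
    """One score per time step: a contrastive-style surrogate that measures
    how dissimilar the K transformed views are to the original representation
    (NOT the paper's exact loss)."""
    scores = []
    for x_t in x:
        z = encode(x_t)
        views = np.tanh(W_trans @ z)  # K transformed views, shape (K, D_REP)
        # Cosine similarity of each transformed view to the untransformed one.
        sim = views @ z / (np.linalg.norm(views, axis=1) * np.linalg.norm(z) + 1e-8)
        scores.append(float(-np.mean(sim)))  # low similarity -> high anomaly score
    return np.array(scores)

x = rng.normal(size=(100, D_IN))  # toy multivariate time series, 100 time steps
s = anomaly_scores(x)
print(s.shape)  # one score per time step
```

Because a score is produced at every time step, anomalies can be localized within a series rather than only flagged at the level of whole sequences, which is the property the abstract emphasizes.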
