Paper Title

Review of security techniques for memristor computing systems

Paper Authors

Minhui Zou, Nan Du, Shahar Kvatinsky

Paper Abstract

Neural network (NN) algorithms have become the dominant tool in visual object recognition, natural language processing, and robotics. To enhance the computational efficiency of these algorithms over traditional von Neumann computing architectures, researchers have been focusing on memristor computing systems. A major drawback of using memristor computing systems today is that, in the artificial intelligence (AI) era, well-trained NN models are intellectual property and, when loaded into memristor computing systems, face the threat of theft, especially when running on edge devices. An adversary may steal well-trained NN models through advanced attacks such as learning attacks and side-channel analysis. In this paper, we review different security techniques for protecting memristor computing systems. Two threat models are described based on their assumptions regarding the adversary's capabilities: a black-box (BB) model and a white-box (WB) model. We categorize the existing security techniques into five classes in the context of these threat models: thwarting learning attacks (BB), thwarting side-channel attacks (BB), NN model encryption (WB), NN weight transformation (WB), and fingerprint embedding (WB). We also present a cross-comparison of the limitations of these security techniques. This paper can serve as an aid when designing secure memristor computing systems.
