Paper Title
Vicious Classifiers: Assessing Inference-time Data Reconstruction Risk in Edge Computing
Paper Authors
Paper Abstract
Privacy-preserving inference in edge computing paradigms encourages users of machine-learning services to locally run a model on their private input and only share the model's outputs for a target task with the server. We study how a vicious server can reconstruct the input data by observing only the model's outputs, while keeping the target-task accuracy very close to that of an honest server, by jointly training a target model (to run on the user's side) and an attack model for data reconstruction (to secretly use on the server's side). We present a new measure to assess the inference-time reconstruction risk. Evaluations on six benchmark datasets show that the model's input can be approximately reconstructed from the outputs of a single inference. We propose a primary defense mechanism to distinguish vicious from honest classifiers at inference time. By studying such a risk associated with emerging ML services, our work has implications for enhancing privacy in edge computing. We discuss open challenges and directions for future studies, and release our code as a benchmark for the community at https://github.com/mmalekzadeh/vicious-classifiers.
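The joint-training idea described in the abstract can be illustrated with a minimal sketch. The following is a hypothetical PyTorch illustration, not the paper's actual implementation: the network architectures, the `lam` trade-off weight, and the dummy 28x28 inputs are assumptions made only for demonstration. It jointly minimizes the user-side classifier's task loss and the server-side attack model's reconstruction loss, so the classifier's outputs are shaped to leak enough information for reconstruction while remaining accurate on the target task.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Target model f: runs on the user's side and outputs K=10 class logits.
# Architecture is a placeholder, not the one used in the paper.
target = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                       nn.Linear(128, 10))

# Attack model g: held secretly by the server; maps the shared outputs
# back to an approximation of the private input.
attack = nn.Sequential(nn.Linear(10, 128), nn.ReLU(),
                       nn.Linear(128, 784))

# Both models are trained together on one joint objective.
opt = torch.optim.Adam([*target.parameters(), *attack.parameters()], lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
lam = 1.0  # hypothetical weight trading off task accuracy vs. reconstruction

# Dummy data standing in for a real dataset (e.g., 28x28 grayscale images).
x = torch.rand(64, 1, 28, 28)
y = torch.randint(0, 10, (64,))

for step in range(100):
    logits = target(x)                          # the only thing the user shares
    x_hat = attack(logits).view(-1, 1, 28, 28)  # server-side reconstruction
    loss = ce(logits, y) + lam * mse(x_hat, x)  # joint objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At deployment, the user receives only `target`; per the threat model in the abstract, the logits shared for a single inference then suffice for the server to run `attack` and recover an approximation of the private input `x`.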