Paper Title
Differentially Private CutMix for Split Learning with Vision Transformer
Paper Authors
Paper Abstract
Recently, the vision transformer (ViT) has started to outpace conventional CNNs in computer vision tasks. For privacy-preserving distributed learning with ViT, federated learning (FL) communicates models, which becomes ill-suited due to ViT's large model size and computing costs. Split learning (SL) sidesteps this by communicating smashed data at a cut-layer, yet suffers from data privacy leakage and large communication costs caused by the high similarity between ViT's smashed data and its input data. Motivated by this problem, we propose DP-CutMixSL, a differentially private (DP) SL framework built on DP patch-level randomized CutMix (DP-CutMix), a novel privacy-preserving inter-client interpolation scheme that replaces randomly selected patches in smashed data. Experimentally, we show that DP-CutMixSL not only boosts privacy guarantees and communication efficiency, but also achieves higher accuracy than its vanilla SL counterpart. Theoretically, we analyze that DP-CutMix amplifies Rényi DP (RDP), which is upper-bounded by its vanilla Mixup counterpart.
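The core operation described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): two clients' smashed data are interpolated at the patch level, with each patch of one client independently replaced by the other client's corresponding patch according to a random mask. The function name `dp_cutmix`, the replacement probability, and the Gaussian noise standing in for the DP mechanism are all assumptions for illustration; the paper's exact mechanism and parameters are not given in this abstract.

```python
import numpy as np

def dp_cutmix(smashed_a, smashed_b, replace_prob=0.5, noise_std=0.1, rng=None):
    """Hypothetical sketch of patch-level randomized CutMix on smashed data.

    smashed_a, smashed_b: (num_patches, dim) cut-layer activations from
    two clients. Each patch of client A is independently replaced by
    client B's patch with probability `replace_prob`; additive Gaussian
    noise stands in for the DP mechanism (an assumption of this sketch).
    """
    rng = np.random.default_rng(rng)
    num_patches = smashed_a.shape[0]
    mask = rng.random(num_patches) < replace_prob      # True -> take B's patch
    mixed = np.where(mask[:, None], smashed_b, smashed_a)
    return mixed + rng.normal(0.0, noise_std, mixed.shape)

# Example: mix two clients' 16-patch, 8-dim smashed representations
# (noise disabled here so the patch-level replacement is visible).
a = np.ones((16, 8))
b = np.zeros((16, 8))
out = dp_cutmix(a, b, replace_prob=0.5, noise_std=0.0, rng=0)
```

With noise disabled, every row of `out` is either entirely from `a` or entirely from `b`, reflecting that the interpolation happens per patch rather than per element.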