Paper Title
FedSPLIT: One-Shot Federated Recommendation System Based on Non-negative Joint Matrix Factorization and Knowledge Distillation
Paper Authors
Paper Abstract
Non-negative matrix factorization (NMF) with missing-value completion is a well-known and effective Collaborative Filtering (CF) method used to provide personalized user recommendations. However, traditional CF relies on the privacy-invasive collection of users' explicit and implicit feedback to build a central recommender model. One-shot federated learning has recently emerged as a way to mitigate the privacy problem while addressing the traditional communication bottleneck of federated learning. In this paper, we present the first unsupervised one-shot federated CF implementation, named FedSPLIT, based on NMF joint factorization. In our solution, the clients first apply local CF in parallel to build distinct client-specific recommenders. Then, the privacy-preserving local item patterns and biases from each client are shared with the processor, which performs a joint factorization to extract the global item patterns. The extracted patterns are then aggregated back to each client to build the local models via knowledge distillation. In our experiments, we demonstrate the feasibility of our approach on standard recommendation datasets. FedSPLIT obtains results comparable to the state of the art (and even outperforms it in certain situations) with a substantial decrease in the number of communications.
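
The following is a minimal Python sketch of the workflow the abstract describes, not the authors' implementation: it assumes masked multiplicative-update NMF for the local CF step, stacking-and-refactorizing the shared item patterns as the processor's joint factorization, and refitting user factors against the fixed global item patterns as a stand-in for the knowledge-distillation step. The function name masked_nmf, the latent dimension k, the synthetic ratings, and the omission of the bias terms are all illustrative assumptions.

import numpy as np

def masked_nmf(R, mask, k=8, iters=200, eps=1e-9, seed=0):
    # NMF with missing-value completion: only observed entries (mask == 1)
    # drive the multiplicative updates.
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    W = rng.random((n_users, k)) + eps   # local user factors
    H = rng.random((k, n_items)) + eps   # local item patterns
    for _ in range(iters):
        W *= ((mask * R) @ H.T) / (((mask * (W @ H)) @ H.T) + eps)
        H *= (W.T @ (mask * R)) / ((W.T @ (mask * (W @ H))) + eps)
    return W, H

# Step 1: each client factorizes its own rating matrix locally (local CF).
rng = np.random.default_rng(1)
n_items, k = 50, 8
clients = []
for _ in range(3):
    R = rng.integers(0, 6, size=(30, n_items)).astype(float)  # 0 marks a missing rating
    mask = (R > 0).astype(float)
    _, H = masked_nmf(R, mask, k=k)
    clients.append({"R": R, "mask": mask, "H": H})

# Step 2: clients share only their item patterns H; the processor stacks them
# and factorizes the stack once to extract global item patterns (one-shot).
stacked = np.vstack([c["H"] for c in clients])                # shape (3 * k, n_items)
_, H_global = masked_nmf(stacked, np.ones_like(stacked), k=k)

# Step 3: each client rebuilds its local model against the fixed global item
# patterns (a stand-in for the paper's knowledge-distillation step): only the
# user factors are re-estimated, locally and privately.
for c in clients:
    W = np.ones((c["R"].shape[0], k))
    for _ in range(200):
        num = (c["mask"] * c["R"]) @ H_global.T
        den = (c["mask"] * (W @ H_global)) @ H_global.T + 1e-9
        W *= num / den
    c["prediction"] = W @ H_global                            # completed rating matrix

Under these assumptions, only the item-pattern matrices leave the clients, and a single round of sharing suffices, which mirrors the one-shot, low-communication property claimed in the abstract.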