Paper Title
Adapting Triplet Importance of Implicit Feedback for Personalized Recommendation
Paper Authors
Paper Abstract
Implicit feedback is frequently used for developing personalized recommendation services due to its ubiquity and accessibility in real-world systems. In order to effectively utilize such information, most research adopts the pairwise ranking method on constructed training triplets (user, positive item, negative item) and aims to distinguish between positive items and negative items for each user. However, most of these methods treat all the training triplets equally, which ignores the subtle differences between different positive or negative items. On the other hand, even though some other works make use of auxiliary information (e.g., dwell time) about user behaviors to capture these subtle differences, such auxiliary information is hard to obtain. To mitigate the aforementioned problems, we propose a novel training framework named Triplet Importance Learning (TIL), which adaptively learns an importance score for each training triplet. We devise two strategies for importance score generation and formulate the whole procedure as a bilevel optimization, which does not require any rule-based design. We integrate the proposed training procedure with several Matrix Factorization (MF)- and Graph Neural Network (GNN)-based recommendation models, demonstrating the compatibility of our framework. Through comparisons with many state-of-the-art methods on three real-world datasets, we show that our proposed method outperforms the best existing models by 3-21% in terms of Recall@k for top-k recommendation.
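The abstract describes a pairwise ranking objective over (user, positive item, negative item) triplets in which each triplet contributes according to a learned importance score rather than equally. The sketch below is a minimal illustration of such an importance-weighted, BPR-style pairwise loss in PyTorch. The function name weighted_bpr_loss, the dot-product scoring of embeddings, and the assumption that importance scores are already available as inputs are illustrative assumptions for this sketch, not the paper's actual TIL formulation, which learns the scores through bilevel optimization.

import torch
import torch.nn.functional as F

def weighted_bpr_loss(user_emb, pos_emb, neg_emb, importance):
    # user_emb, pos_emb, neg_emb: (batch, dim) embeddings for each triplet.
    # importance: (batch,) per-triplet weights; here assumed to be given,
    # whereas in TIL they would be produced by a learned importance model.
    pos_score = (user_emb * pos_emb).sum(dim=-1)  # predicted preference for the positive item
    neg_score = (user_emb * neg_emb).sum(dim=-1)  # predicted preference for the negative item
    # Standard BPR maximizes log sigmoid(pos_score - neg_score) uniformly over triplets;
    # here each triplet's contribution is scaled by its importance score instead.
    per_triplet = -F.logsigmoid(pos_score - neg_score)
    return (importance * per_triplet).mean()

# Toy usage with random embeddings; uniform importance recovers plain BPR.
u = torch.randn(32, 64)
p = torch.randn(32, 64)
n = torch.randn(32, 64)
w = torch.ones(32)
loss = weighted_bpr_loss(u, p, n, w)

With uniform weights the loss reduces to the standard pairwise ranking objective; the point of the weighting is that triplets judged more informative can be emphasized while noisy or uninformative ones are down-weighted.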