Paper Title
Learning by Minimizing the Sum of Ranked Range
Paper Authors
Paper Abstract
In forming learning objectives, one oftentimes needs to aggregate a set of individual values into a single output. Such cases occur in the aggregate loss, which combines individual losses of a learning model over each training sample, and in the individual loss for multi-label learning, which combines prediction scores over all class labels. In this work, we introduce the sum of ranked range (SoRR) as a general approach to forming learning objectives. A ranked range is a consecutive sequence of sorted values of a set of real numbers. The minimization of SoRR is solved with the difference-of-convex algorithm (DCA). We explore two applications of the SoRR minimization framework in machine learning, namely the AoRR aggregate loss for binary classification and the TKML individual loss for multi-label/multi-class classification. Our empirical results highlight the effectiveness of the proposed optimization framework and demonstrate the applicability of the proposed losses on synthetic and real datasets.
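To make the notion of a ranked range concrete, the sketch below computes the sum over a ranked range of a finite set of values as the difference of two top-sums, each of which is convex in the inputs, which is what makes a DCA-style treatment natural. This is only a minimal illustration assuming that reading of the abstract; the function name, interface, and the worked numbers are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def sum_of_ranked_range(values, k, m):
    """Sum of the (k+1)-th through m-th largest values of a set.

    Illustrative sketch: a ranked range is a consecutive sequence of the
    sorted values, and its sum can be written as top-m sum minus top-k sum.
    """
    assert 0 <= k < m <= len(values)
    sorted_desc = np.sort(np.asarray(values, dtype=float))[::-1]
    top_m = sorted_desc[:m].sum()   # sum of the m largest values (convex in the inputs)
    top_k = sorted_desc[:k].sum()   # sum of the k largest values (convex in the inputs)
    return top_m - top_k            # sum over the ranked range (k, m]

# Example: individual losses over six training samples.
losses = [0.2, 3.5, 0.7, 1.1, 0.05, 2.4]
# Ignore the single largest loss (a possible outlier) and the three smallest,
# aggregating only the 2nd and 3rd largest losses.
print(sum_of_ranked_range(losses, k=1, m=3))  # 2.4 + 1.1 = 3.5
```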