Paper Title

Fair Meta-Learning For Few-Shot Classification

Authors

Chen Zhao, Changbin Li, Jincheng Li, Feng Chen

Abstract

Artificial intelligence nowadays plays an increasingly prominent role in our lives, since decisions that were once made by humans are now delegated to automated systems. A machine learning algorithm trained on biased data, however, tends to make unfair predictions. Developing classification algorithms that are fair with respect to protected attributes of the data thus becomes an important problem. Motivated by concerns surrounding the fairness effects of sharing and few-shot machine learning tools, such as the Model Agnostic Meta-Learning framework, we propose a novel fair fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-training by controlling the decision boundary covariance between the protected variable and the signed distance from the feature vectors to the decision boundary. Through extensive experiments on two real-world image benchmarks over three state-of-the-art meta-learning algorithms, we empirically demonstrate that our proposed approach efficiently mitigates biases in model output and generalizes both accuracy and fairness to unseen tasks with a limited amount of training samples.
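The fairness quantity named in the abstract, the decision boundary covariance, is the empirical covariance between the protected attribute and the signed distance from each feature vector to the decision boundary. A minimal sketch of how such a quantity could be computed is below; the function name, the toy data, and the use of the raw linear score as a stand-in for signed distance are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def boundary_covariance(z, d):
    """Empirical covariance between a protected attribute z and the
    signed distances d to a decision boundary:
        (1/N) * sum_i (z_i - mean(z)) * d_i
    A value near zero indicates the boundary is roughly independent
    of the protected attribute; this is an illustrative sketch."""
    z = np.asarray(z, dtype=float)
    d = np.asarray(d, dtype=float)
    return float(np.mean((z - z.mean()) * d))

# Toy example: for a linear classifier with weights w and bias b,
# the signed distance is proportional to the raw score w.x + b.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
z = (X[:, 0] > 0).astype(float)   # hypothetical protected attribute, correlated with x0
w, b = np.array([1.0, 0.5]), 0.0
d = X @ w + b
print(boundary_covariance(z, d))  # positive here, since z and the score share x0
```

In a meta-learning setting this scalar would typically be added as a penalty (or constraint) on the inner- and outer-loop losses so that adaptation keeps the covariance small; the details of that combination are specific to the paper's method.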
