Title

Bias Discovery in Machine Learning Models for Mental Health

Authors

Pablo Mosteiro, Jesse Kuiper, Judith Masthoff, Floortje Scheepers, Marco Spruit

Abstract

Fairness and bias are crucial concepts in artificial intelligence, yet they are relatively ignored in machine learning applications in clinical psychiatry. We computed fairness metrics and present bias mitigation strategies using a model trained on clinical mental health data. We collected structured data related to the admission, diagnosis, and treatment of patients in the psychiatry department of the University Medical Center Utrecht. We trained a machine learning model to predict future administrations of benzodiazepines on the basis of past data. We found that gender plays an unexpected role in the predictions; this constitutes bias. Using the AI Fairness 360 package, we implemented reweighing and discrimination-aware regularization as bias mitigation strategies, and we explored their implications for model performance. This is the first application of bias exploration and mitigation in a machine learning model trained on real clinical psychiatry data.
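The reweighing strategy mentioned in the abstract (due to Kamiran and Calders, and available in AI Fairness 360 as a preprocessing algorithm) can be sketched without the library. The idea is to assign each (group, label) cell the weight P(group) * P(label) / P(group, label), so that the weighted data shows no association between the protected attribute and the outcome. The minimal sketch below uses a tiny synthetic dataset and illustrative function names; it is not the paper's code or data.

```python
from collections import Counter

def statistical_parity_difference(groups, labels):
    """P(y=1 | unprivileged group 0) - P(y=1 | privileged group 1).

    Zero means parity; a negative value means the unprivileged
    group receives the favorable label less often.
    """
    def positive_rate(g):
        ys = [y for s, y in zip(groups, labels) if s == g]
        return sum(ys) / len(ys)
    return positive_rate(0) - positive_rate(1)

def reweighing_weights(groups, labels):
    """Kamiran-Calders reweighing: w(s, y) = P(s) * P(y) / P(s, y)."""
    n = len(groups)
    n_s = Counter(groups)                 # counts per group
    n_y = Counter(labels)                 # counts per label
    n_sy = Counter(zip(groups, labels))   # joint counts
    return {(s, y): (n_s[s] * n_y[y]) / (n * n_sy[(s, y)])
            for (s, y) in n_sy}

# Synthetic example: group 1 gets the positive label more often.
groups = [1, 1, 1, 1, 0, 0, 0, 0]
labels = [1, 1, 1, 0, 1, 0, 0, 0]

print(statistical_parity_difference(groups, labels))  # -0.5: biased
print(reweighing_weights(groups, labels))
# Under-represented cells such as (0, 1) get weight > 1,
# over-represented cells such as (1, 1) get weight < 1.
```

These per-instance weights are then passed to a learner that accepts sample weights (for example, the `sample_weight` argument of scikit-learn's `fit` methods); after reweighing, the weighted positive rates of the two groups coincide.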
