Paper Title


Magnification Prior: A Self-Supervised Method for Learning Representations on Breast Cancer Histopathological Images

Authors

Prakash Chandra Chhipa, Richa Upadhyay, Gustav Grund Pihlgren, Rajkumar Saini, Seiichi Uchida, Marcus Liwicki

Abstract


This work presents a novel self-supervised pre-training method to learn efficient representations without labels on histopathology medical images by utilizing magnification factors. Other state-of-the-art works mainly focus on fully supervised learning approaches that rely heavily on human annotations. However, the scarcity of labeled and unlabeled data is a long-standing challenge in histopathology. Currently, representation learning without labels remains unexplored in the histopathology domain. The proposed method, Magnification Prior Contrastive Similarity (MPCS), enables self-supervised learning of representations without labels on the small-scale breast cancer dataset BreakHis by exploiting magnification factors, inductive transfer, and reduced human prior. The proposed method matches fully supervised state-of-the-art performance in malignancy classification when only 20% of labels are used in fine-tuning, and outperforms previous works in the fully supervised learning setting. It formulates a hypothesis and provides empirical evidence to support that reducing human prior leads to efficient representation learning in self-supervision. The implementation of this work is available online on GitHub - https://github.com/prakashchhipa/Magnification-Prior-Self-Supervised-Method
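The abstract does not spell out the MPCS objective, but its name and the SimCLR-style setup it describes suggest a contrastive loss in which the two "views" of a patch are the same tissue region at two different magnification factors, rather than synthetic augmentations. A minimal sketch of such an NT-Xent-style loss is below; the function name, the NumPy implementation, and the exact pairing scheme are illustrative assumptions, not the paper's verified formulation.

```python
import numpy as np

def magnification_contrastive_loss(z_low, z_high, temperature=0.5):
    """NT-Xent-style contrastive loss over magnification pairs.

    z_low, z_high: (N, D) embeddings of the SAME N tissue patches at a
    lower and a higher magnification factor (e.g. 100x and 200x).
    Each patch's two magnifications form the positive pair; all other
    embeddings in the batch act as negatives.
    """
    n = z_low.shape[0]
    z = np.concatenate([z_low, z_high], axis=0)            # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)       # unit-normalize
    sim = (z @ z.T) / temperature                          # cosine similarity
    np.fill_diagonal(sim, -np.inf)                         # exclude self-pairs
    # row i in [0, N) is positive with row i+N, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Under this formulation, embeddings of matching magnification views are pulled together while views of different patches are pushed apart, so the loss drops as the two magnification views of each patch become more similar than cross-patch pairs.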
