Paper title
StainNet: a fast and robust stain normalization network
Paper authors
Paper abstract
Stain normalization often refers to transferring the color distribution of a source image to that of a target image and has been widely used in biomedical image analysis. Conventional stain normalization constructs a pixel-by-pixel color mapping model that depends on only one reference image, so it cannot accurately achieve style transformation between image datasets. In principle, deep learning-based methods can solve this style transformation well thanks to their complicated network structures; however, that very complexity leads to low computational efficiency and artifacts in the style transformation, which has restricted practical application. Here, we use distillation learning to reduce the complexity of deep learning methods and propose a fast and robust network called StainNet to learn the color mapping between the source image and the target image. StainNet can learn the color mapping relationship from a whole dataset and adjusts color values in a pixel-to-pixel manner. The pixel-to-pixel manner restricts the network size and avoids artifacts in the style transformation. Results on cytopathology and histopathology datasets show that StainNet achieves performance comparable to deep learning-based methods. Computational results demonstrate that StainNet is more than 40 times faster than StainGAN and can normalize a 100,000 x 100,000 whole-slide image in 40 seconds.
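The pixel-to-pixel mapping described above can be sketched as a tiny network applied to each pixel's color independently of its neighbors, which is mathematically equivalent to a stack of 1x1 convolutions. A minimal NumPy sketch follows; the function name, layer sizes, and random weights are illustrative assumptions for exposition, not the paper's trained StainNet model:

```python
import numpy as np

def pixel_color_mapping(image, w1, b1, w2, b2):
    """Apply a tiny per-pixel MLP (a stand-in for stacked 1x1 conv
    layers): each pixel's color is mapped independently of its
    spatial neighbors, so no spatial artifacts can be introduced."""
    h, w, c = image.shape
    x = image.reshape(-1, c).astype(np.float64)   # (pixels, channels)
    hidden = np.maximum(x @ w1 + b1, 0.0)         # ReLU hidden layer
    y = hidden @ w2 + b2                          # back to c channels
    return y.reshape(h, w, c)

# Illustrative random weights mapping 3 -> 8 -> 3 channels
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

img = rng.random((4, 4, 3))
img[0, 0] = img[3, 3]  # two spatially distant pixels, identical color
out = pixel_color_mapping(img, w1, b1, w2, b2)
```

Because the mapping depends only on a pixel's color value, any two pixels with the same input color receive the same output color regardless of their position, which is the property that rules out the style-transfer artifacts seen with larger image-to-image networks.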