Compound Figure Separation of Biomedical Images: Mining Large Datasets for Self-supervised Learning

Tianyuan Yao1, Chang Qu1, Jun Long2, Quan Liu1, Ruining Deng1, Yuanhan Tian1, Jiachen Xu1, Aadarsh Jha1, Zuhayr Asad1, Shunxing Bao3, Mengyang Zhao4, Agnes Fogo5, Bennett Landman3, Haichun Yang5, Catie Chang3, Yuankai Huo3
1: Computer Science, Vanderbilt University, 2: Big Data Institute, Central South University, 3: Electrical and Computer Engineering, Vanderbilt University, 4: Dartmouth College, 5: Department of Pathology, Vanderbilt University Medical Center
Publication date: 2022/09/04
https://doi.org/10.59275/j.melba.2022-5aa9

Abstract

With the rapid development of self-supervised learning (e.g., contrastive learning), the importance of having large-scale images (even without annotations) for training a more generalizable AI model has been widely recognized in medical image analysis. However, collecting task-specific unannotated data at scale can be challenging for individual labs. Existing online resources, such as digital books, publications, and search engines, offer a new avenue for obtaining large-scale images. However, published images in healthcare (e.g., radiology and pathology) contain a considerable number of compound figures with subplots. To extract and separate compound figures into usable individual images for downstream learning, we propose a simple compound figure separation (SimCFS) framework that does not require the detection bounding box annotations traditionally needed, introducing a new loss function and a hard-case simulation. Our technical contribution is four-fold: (1) we introduce a simulation-based training framework that minimizes the need for resource-intensive bounding box annotations; (2) we propose a new side loss that is optimized for compound figure separation; (3) we propose an intra-class image augmentation method to simulate hard cases; and (4) to the best of our knowledge, this is the first study to evaluate the efficacy of leveraging self-supervised learning with compound figure separation. In our experiments, the proposed SimCFS achieved state-of-the-art performance on the ImageCLEF 2016 Compound Figure Separation Database. A self-supervised model pretrained on the large-scale mined figures with a contrastive learning algorithm improved the accuracy of downstream image classification tasks. The source code of SimCFS is made publicly available at https://github.com/hrlblab/ImageSeperation
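To make the downstream use of the mined subfigures concrete, the sketch below shows one way the SimCFS-separated images could feed a contrastive self-supervised pretraining loop. This is a minimal illustrative sketch, not the authors' implementation: the SimSiam-style objective, the ResNet-18 backbone, and the "mined_subfigures" folder path are all assumptions made for illustration, and it presumes PyTorch and torchvision are available.

```python
# Minimal sketch (assumptions noted above): contrastive pretraining on
# subfigures mined and separated by a compound-figure-separation step.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Two random augmented views per mined subfigure.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

class TwoViews:
    """Return two independently augmented views of the same image."""
    def __init__(self, t): self.t = t
    def __call__(self, img): return self.t(img), self.t(img)

class SimSiamSketch(nn.Module):
    """Illustrative SimSiam-style model: encoder + projector + predictor."""
    def __init__(self, dim=2048, pred_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()  # 512-d features from ResNet-18
        self.encoder = backbone
        self.projector = nn.Sequential(nn.Linear(512, dim), nn.BatchNorm1d(dim),
                                       nn.ReLU(), nn.Linear(dim, dim))
        self.predictor = nn.Sequential(nn.Linear(dim, pred_dim), nn.BatchNorm1d(pred_dim),
                                       nn.ReLU(), nn.Linear(pred_dim, dim))

    def forward(self, x1, x2):
        z1, z2 = self.projector(self.encoder(x1)), self.projector(self.encoder(x2))
        p1, p2 = self.predictor(z1), self.predictor(z2)
        # Symmetric negative cosine similarity with stop-gradient on the targets.
        return -(F.cosine_similarity(p1, z2.detach()).mean()
                 + F.cosine_similarity(p2, z1.detach()).mean()) / 2

if __name__ == "__main__":
    # "mined_subfigures/" is a placeholder path for SimCFS-separated images.
    data = ImageFolder("mined_subfigures", transform=TwoViews(augment))
    loader = DataLoader(data, batch_size=64, shuffle=True, num_workers=4, drop_last=True)
    model = SimSiamSketch()
    opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=1e-4)
    for (v1, v2), _ in loader:  # labels are ignored during self-supervised pretraining
        loss = model(v1, v2)
        opt.zero_grad(); loss.backward(); opt.step()
```

The pretrained encoder would then be fine-tuned or linearly probed on the downstream biomedical classification task, which is the evaluation setting described in the abstract.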

Keywords

Compound figures · Biomedical data · Deep learning · Contrastive learning · Self-supervised learning

Bibtex
@article{melba:2022:025:yao,
  title   = "Compound Figure Separation of Biomedical Images: Mining Large Datasets for Self-supervised Learning",
  author  = "Yao, Tianyuan and Qu, Chang and Long, Jun and Liu, Quan and Deng, Ruining and Tian, Yuanhan and Xu, Jiachen and Jha, Aadarsh and Asad, Zuhayr and Bao, Shunxing and Zhao, Mengyang and Fogo, Agnes and Landman, Bennett and Yang, Haichun and Chang, Catie and Huo, Yuankai",
  journal = "Machine Learning for Biomedical Imaging",
  volume  = "1",
  issue   = "August 2022 issue",
  year    = "2022",
  pages   = "1--19",
  issn    = "2766-905X",
  doi     = "https://doi.org/10.59275/j.melba.2022-5aa9",
  url     = "https://melba-journal.org/2022:025"
}

RIS
TY  - JOUR
AU  - Yao, Tianyuan
AU  - Qu, Chang
AU  - Long, Jun
AU  - Liu, Quan
AU  - Deng, Ruining
AU  - Tian, Yuanhan
AU  - Xu, Jiachen
AU  - Jha, Aadarsh
AU  - Asad, Zuhayr
AU  - Bao, Shunxing
AU  - Zhao, Mengyang
AU  - Fogo, Agnes
AU  - Landman, Bennett
AU  - Yang, Haichun
AU  - Chang, Catie
AU  - Huo, Yuankai
PY  - 2022
TI  - Compound Figure Separation of Biomedical Images: Mining Large Datasets for Self-supervised Learning
T2  - Machine Learning for Biomedical Imaging
VL  - 1
IS  - August 2022 issue
SP  - 1
EP  - 19
SN  - 2766-905X
DO  - https://doi.org/10.59275/j.melba.2022-5aa9
UR  - https://melba-journal.org/2022:025
ER  -
