Background Despite recent advances in cancer immunotherapies, their efficacy varies significantly among patients. To better understand the mechanisms of drug resistance, it is essential to characterize immune responses to immunotherapies in the tumor immune microenvironment (TME) of intact patient tissues. To this end, quantitative spatial immune profiling of pathology images has been the focus of many recent studies. Such analysis often depends critically on automated image segmentation of tumor and stromal compartments. However, current segmentation approaches, even those based on deep learning, often perform poorly on datasets that differ from the data on which they were trained. Specifically, tissue segmentation models trained on one organ type (source) degrade in performance when applied directly to images of another organ type (target), even when the regions to be segmented are morphologically similar across the two domains. Here, we present a segmentation approach that adapts knowledge learned from labeled source data of one cancer type to unlabeled target data of another cancer type via unsupervised domain adaptation (UDA) frameworks. This research will help build deep learning models that significantly reduce the need for expert manual annotations.
Methods Annotated colorectal cancer (CRC)1 (target domain) and prostate cancer (source domain)2 datasets were used for tumor tissue segmentation model development, containing image tiles from 38 and 20 whole slide images, respectively. We compared the performance and robustness of four approaches. First, we implemented two output-space domain-adversarial UDAs. We then implemented a self-training-based approach. Finally, we designed a two-stage UDA approach: a first stage of self-training, followed by alignment of target-domain features with category anchors generated from the source data.
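The self-training step described above hinges on generating pseudo-labels for unlabeled target tiles from the source model's per-pixel predictions. As a hedged illustration only (the function name, the 0.9 confidence threshold, and the ignore-label convention are our assumptions, not the study's implementation), a minimal NumPy sketch:

```python
import numpy as np

IGNORE = 255  # conventional label value excluded from the segmentation loss

def pseudo_labels(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Turn per-pixel class probabilities of shape (H, W, C) into pseudo-labels.

    Pixels whose top-class confidence falls below `threshold` are marked
    IGNORE so they do not contribute to the self-training loss.
    """
    conf = probs.max(axis=-1)          # top-class confidence per pixel
    labels = probs.argmax(axis=-1)     # predicted class per pixel
    labels[conf < threshold] = IGNORE  # drop uncertain pixels
    return labels

# Toy example: a 2x2 "tile" with 2 classes (0 = stroma, 1 = tumor).
probs = np.array([[[0.95, 0.05], [0.40, 0.60]],
                  [[0.10, 0.90], [0.55, 0.45]]])
labels = pseudo_labels(probs)
# Only the two confident pixels keep their class; the rest are ignored.
```

In a two-stage scheme like the one above, only the retained high-confidence pixels would supervise retraining on the target domain, before any feature alignment step.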
Results Directly applying a tumor tissue segmentation model trained on prostate cancer images (source) to CRC images (target) resulted in an intersection-over-union (IoU) score of 62.5%, which was 19 IoU points lower (the domain gap) than a model trained on target data. Methods based on output-space domain-adversarial training reduced the domain gap by up to 8 IoU points, outperforming the self-training-based methods, which reduced the gap by only 4 points. Both sets of approaches improved precision by 10%.
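The IoU metric and the domain-gap arithmetic quoted above can be made concrete. The following NumPy sketch (the masks are toy data, not from the study) computes IoU for binary masks and restates the reported gap:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union (Jaccard index) of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

# Toy 2x3 tumor masks: prediction vs. ground truth.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
score = iou(pred, gt)  # intersection = 2, union = 4, so IoU = 0.5

# Domain gap as reported: target-trained IoU minus source-only IoU.
source_only_iou = 62.5
target_trained_iou = source_only_iou + 19.0
domain_gap = target_trained_iou - source_only_iou  # 19.0 IoU points
```

A UDA method that "reduces the domain gap by 8 IoU points" would thus lift the source-only score from 62.5% toward roughly 70.5% on the target domain.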
Conclusions We demonstrate the feasibility of designing tumor segmentation models that are robust and generalizable across multiple indications. These UDA approaches have the potential to accelerate our understanding of the factors influencing immunotherapy efficacy by automating the annotation of required tissue regions.
1. Graham S, Chen H, Gamper J, Dou Q, Heng PA, Snead D, Tsang YW, Rajpoot N. MILD-Net: minimal information loss dilated network for quantitative gland instance segmentation in colon histology images. Med Image Anal 2019;52:199–211.
2. Bulten W, Bándi P, Hoven J, et al. Epithelium segmentation using deep learning in H&E-stained prostate specimens with immunohistochemistry as reference standard. Sci Rep 2019;9:864.
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.