TY - JOUR
T1 - Deep learning-based medical image segmentation with limited labels
AU - Chi, Weicheng
AU - Ma, Lin
AU - Wu, Junjie
AU - Chen, Mingli
AU - Lu, Weiguo
AU - Gu, Xuejun
N1 - Funding Information:
This work was partially supported by NIH R01 CA218402 and R01 CA235723. The authors acknowledge Dr Haibin Chen for assisting with the REOS algorithm/code, and Dr Mingkui Tan for general comments on the manuscript.
Publisher Copyright:
© 2020 Institute of Physics and Engineering in Medicine
PY - 2020/11/25
Y1 - 2020/11/25
N2 - Deep learning (DL)-based auto-segmentation has the potential for accurate organ delineation in radiotherapy applications but requires large amounts of clean labeled data to train a robust model. However, annotating medical images is extremely time-consuming and requires clinical expertise, especially for segmentation, which demands voxel-wise labels. On the other hand, medical images without annotations are abundant and highly accessible. To alleviate the influence of the limited number of clean labels, we propose a weakly supervised DL training approach using deformable image registration (DIR)-based annotations, leveraging the abundance of unlabeled data. We generate pseudo-contours by utilizing DIR to propagate atlas contours onto abundant unlabeled images and train a robust DL-based segmentation model. With 10 labeled cases from the TCIA dataset and 50 unlabeled CT scans from our institution, our model achieved Dice similarity coefficients of 87.9%, 73.4%, 73.4%, 63.2% and 61.0% on the mandible, left and right parotid glands, and left and right submandibular glands of the TCIA test set, and competitive performance on our institutional clinical dataset and a third-party (PDDCA) dataset. Experimental results demonstrated that the proposed method outperformed traditional multi-atlas DIR methods and fully supervised training with limited data, and is promising for DL-based medical image segmentation applications with limited annotated data.
AB - Deep learning (DL)-based auto-segmentation has the potential for accurate organ delineation in radiotherapy applications but requires large amounts of clean labeled data to train a robust model. However, annotating medical images is extremely time-consuming and requires clinical expertise, especially for segmentation, which demands voxel-wise labels. On the other hand, medical images without annotations are abundant and highly accessible. To alleviate the influence of the limited number of clean labels, we propose a weakly supervised DL training approach using deformable image registration (DIR)-based annotations, leveraging the abundance of unlabeled data. We generate pseudo-contours by utilizing DIR to propagate atlas contours onto abundant unlabeled images and train a robust DL-based segmentation model. With 10 labeled cases from the TCIA dataset and 50 unlabeled CT scans from our institution, our model achieved Dice similarity coefficients of 87.9%, 73.4%, 73.4%, 63.2% and 61.0% on the mandible, left and right parotid glands, and left and right submandibular glands of the TCIA test set, and competitive performance on our institutional clinical dataset and a third-party (PDDCA) dataset. Experimental results demonstrated that the proposed method outperformed traditional multi-atlas DIR methods and fully supervised training with limited data, and is promising for DL-based medical image segmentation applications with limited annotated data.
KW - Deep learning
KW - Deformable image registration
KW - Limited labels
KW - Segmentation
UR - http://www.scopus.com/inward/record.url?scp=85099231425&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099231425&partnerID=8YFLogxK
U2 - 10.1088/1361-6560/abc363
DO - 10.1088/1361-6560/abc363
M3 - Article
C2 - 33086205
AN - SCOPUS:85099231425
SN - 0031-9155
VL - 65
JO - Physics in Medicine and Biology
JF - Physics in Medicine and Biology
IS - 23
M1 - 235001
ER -