Registration-guided deep learning image segmentation for cone beam CT–based online adaptive radiotherapy

Lin Ma, Weicheng Chi, Howard E. Morgan, Mu Han Lin, Mingli Chen, David Sher, Dominic Moon, Dat T. Vo, Vladimir Avkshtol, Weiguo Lu, Xuejun Gu

Research output: Contribution to journal › Article › peer-review



Purpose: Adaptive radiotherapy (ART), especially online ART, effectively accounts for positioning errors and anatomical changes. One key component of the online ART process is the accurate and efficient delineation of organs at risk (OARs) and targets on online images, such as cone beam computed tomography (CBCT). Direct application of deep learning (DL)-based segmentation to CBCT images suffers from issues such as low image quality and limited available contour labels for training. To overcome these obstacles to online CBCT segmentation, we propose a registration-guided DL (RgDL) segmentation framework that integrates image registration algorithms and DL segmentation models. Methods: The RgDL framework is composed of two components: image registration and RgDL segmentation. The image registration algorithm transforms/deforms planning contours, which are subsequently used as guidance by the DL model to obtain accurate final segmentations. We had two implementations of the proposed framework—Rig-RgDL (Rig for rigid body) and Def-RgDL (Def for deformable)—with rigid body (RB) registration or deformable image registration (DIR) as the registration algorithm, respectively, and U-Net as the DL model architecture. The two implementations of the RgDL framework were trained and evaluated on seven OARs in an institutional clinical head-and-neck dataset. Results: Compared to the baseline approaches using registration or DL alone, RgDL achieved more accurate segmentation, as measured by higher mean Dice similarity coefficients (DSCs) and other distance-based metrics. Rig-RgDL achieved an average DSC of 84.5% on the seven OARs, higher than RB or DL alone by 4.5% and 4.7%, respectively. The average DSC of Def-RgDL was 86.5%, higher than DIR or DL alone by 2.4% and 6.7%, respectively. The inference time required by the DL model component to generate final segmentations of the seven OARs was less than 1 s in RgDL. By examining the contours from RgDL and DL case by case, we found that RgDL was less susceptible to image artifacts. We also studied how the performance of RgDL and DL varies with the size of the training dataset. The DSC of DL dropped by 12.1% as the number of training cases decreased from 22 to 5, whereas RgDL dropped by only 3.4%. Conclusion: By incorporating patient-specific registration guidance into a population-based DL segmentation model, the RgDL framework overcomes the obstacles associated with online CBCT segmentation, including low image quality and insufficient training data, and achieved better segmentation accuracy than the baseline methods. The resulting segmentation accuracy and efficiency show promise for applying the RgDL framework to online ART.
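The abstract describes two reusable ideas: feeding the registration-propagated planning contour to the DL model as guidance, and scoring segmentations with the Dice similarity coefficient (DSC). A minimal sketch of both is below; note that stacking the propagated contour mask as an extra input channel is an assumed guidance mechanism for illustration — the paper's exact implementation may differ.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks:
    DSC = 2 |pred ∩ gt| / (|pred| + |gt|)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def make_rgdl_input(cbct, propagated_contour):
    """Form a guided input for the segmentation model by stacking the
    CBCT image with the registration-propagated contour mask as a
    second channel (hypothetical channel-concatenation guidance)."""
    mask = np.asarray(propagated_contour, dtype=cbct.dtype)
    return np.stack([cbct, mask], axis=0)  # shape: (2, H, W)
```

For example, a prediction overlapping half of a two-voxel ground-truth mask with one false positive yields DSC = 2·1/(2+1) ≈ 0.667, matching the percentage scores reported in the abstract when multiplied by 100.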

Original language: English (US)
Pages (from-to): 5304-5316
Number of pages: 13
Journal: Medical Physics
Issue number: 8
State: Published - Aug 2022


Keywords

  • image registration
  • image segmentation
  • online adaptive radiotherapy

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging

