Automatic liver tumor localization using deep learning-based liver boundary motion estimation and biomechanical modeling (DL-Bio)

Hua Chieh Shao, Xiaokun Huang, Michael R. Folkert, Jing Wang, You Zhang

Research output: Contribution to journal › Article › peer-review

3 Scopus citations


Purpose: Recently, two-dimensional-to-three-dimensional (2D-3D) deformable registration has been applied to deform liver tumor contours from prior reference images onto estimated cone-beam computed tomography (CBCT) target images to automate on-board tumor localization. Biomechanical modeling has also been introduced to fine-tune the intra-liver deformation vector fields (DVFs) solved by 2D-3D deformable registration, especially in low-contrast regions, using tissue elasticity information and liver boundary DVFs. However, the caudal liver boundary shows low contrast against surrounding tissues in the cone-beam projections, which degrades the accuracy of intensity-based 2D-3D deformable registration there and yields less accurate boundary conditions for biomechanical modeling. We developed a deep learning (DL)-based method to optimize the liver boundary DVFs after 2D-3D deformable registration, to further improve the accuracy of the subsequent biomechanical modeling and liver tumor localization.

Methods: The DL-based network was built on the U-Net architecture and trained in a supervised fashion to learn the motion correlation between the cranial and caudal liver boundaries, optimizing the liver boundary DVFs. The network input had three channels, each featuring the 3D DVF estimated by 2D-3D deformable registration along one Cartesian direction (x, y, z). To incorporate patient-specific liver boundary information into the DVFs, the DVFs were masked by a liver boundary ring structure generated from the liver contour of the prior reference image. The network output was the optimized DVF along the liver boundary, with higher accuracy. From these optimized DVFs, boundary conditions were extracted for biomechanical modeling to further refine the solution of intra-liver tumor motion. We evaluated the method on 34 liver cancer patient cases, with 24 used for training and 10 for testing.
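The patient-specific masking step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the ring is built here by morphological dilation and erosion of a binary liver mask, and the ring width, array shapes, and function names are all assumptions.

```python
import numpy as np
from scipy import ndimage


def boundary_ring_mask(liver_mask, ring_width=2):
    """Build a ring structure around the liver boundary by dilating and
    eroding the binary liver mask (the ring width is an assumption)."""
    dilated = ndimage.binary_dilation(liver_mask, iterations=ring_width)
    eroded = ndimage.binary_erosion(liver_mask, iterations=ring_width)
    return dilated & ~eroded


def mask_dvf(dvf, ring):
    """Zero the 3-channel DVF (x, y, z displacements per voxel) outside
    the boundary ring, keeping only liver-boundary motion."""
    return dvf * ring[..., None]


# Toy example: a spherical "liver" mask and a random DVF volume.
shape = (32, 32, 32)
zz, yy, xx = np.mgrid[:32, :32, :32]
liver = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 10 ** 2
ring = boundary_ring_mask(liver)
dvf = np.random.default_rng(0).normal(size=shape + (3,))
masked = mask_dvf(dvf, ring)  # nonzero only on the boundary ring
```

In practice the masked three-channel DVF would form the network input, with each channel holding the displacement component along one Cartesian axis.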
We evaluated and compared the performance of three methods: 2D-3D deformable registration alone, 2D-3D-Bio (2D-3D deformable registration with biomechanical modeling), and DL-Bio (DL model prediction with biomechanical modeling). Tumor localization errors were quantified by calculating the center-of-mass errors (COMEs), DICE coefficients, and Hausdorff distances between the deformed liver tumor contours and manually segmented "gold-standard" contours.

Results: The DVFs predicted by the DL model showed improved accuracy at the liver boundary, which translated into more accurate liver tumor localization through biomechanical modeling. On a total of 90 evaluated images and tumor contours, the average (± s.d.) liver tumor COMEs of the 2D-3D, 2D-3D-Bio, and DL-Bio techniques were 4.7 ± 1.9 mm, 2.9 ± 1.0 mm, and 1.7 ± 0.4 mm, respectively. The corresponding average (± s.d.) DICE coefficients were 0.60 ± 0.12, 0.71 ± 0.07, and 0.78 ± 0.03, and the average (± s.d.) Hausdorff distances were 7.0 ± 2.6 mm, 5.4 ± 1.5 mm, and 4.5 ± 1.3 mm, respectively.

Conclusion: DL-Bio solves a general correlation model to improve the accuracy of the DVFs at the liver boundary. With the improved boundary conditions, the accuracy of biomechanical modeling can be further increased for accurate localization of low-contrast intra-liver tumors.
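The three evaluation metrics above are standard and can be sketched as follows; this is a generic illustration assuming binary tumor masks and isotropic 1 mm voxels, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def center_of_mass_error(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """COME: Euclidean distance (in mm, given voxel spacing) between
    the centers of mass of two binary tumor masks."""
    ca = np.array(np.nonzero(mask_a)).mean(axis=1)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1)
    return np.linalg.norm((ca - cb) * np.asarray(spacing))


def dice(mask_a, mask_b):
    """DICE coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())


def hausdorff(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between the voxel coordinates
    of two binary masks, scaled by voxel spacing."""
    pa = np.array(np.nonzero(mask_a)).T * np.asarray(spacing)
    pb = np.array(np.nonzero(mask_b)).T * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```

Given a deformed tumor mask and a "gold-standard" mask on the same grid, each function returns a single scalar; a one-voxel rigid shift of a cube, for instance, yields a COME and Hausdorff distance of one voxel spacing.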

Original language: English (US)
Pages (from-to): 7790-7805
Number of pages: 16
Journal: Medical physics
Issue number: 12
State: Published - Dec 2021


Keywords

  • CBCT
  • biomechanical modeling
  • convolutional neural network
  • deep learning
  • deformable registration
  • liver

ASJC Scopus subject areas

  • Biophysics
  • Radiology Nuclear Medicine and imaging

