Single patient learning for adaptive radiotherapy dose prediction

Austen Maniscalco, Xiao Liang, Mu Han Lin, Steve Jiang, Dan Nguyen

Research output: Contribution to journal › Article › peer-review

Abstract

Background: Throughout a patient's course of radiation therapy (RT), maintaining the accuracy of the initial treatment plan over time is challenging due to anatomical changes, such as those stemming from patient weight loss or tumor shrinkage. Online adaptation of the RT plan to these changes is crucial, but is hindered by manual and time-consuming processes. While deep learning (DL) based solutions have shown promise in streamlining adaptive radiation therapy (ART) workflows, they often require large and extensive datasets to train population-based models.

Purpose: This study extends our prior research by introducing a minimalist approach to patient-specific adaptive dose prediction. In contrast to our prior method, which involved fine-tuning a pre-trained population model, the new method trains a model from scratch using only a patient's initial treatment data. This patient-specific dose predictor aims to enhance clinical accessibility, thereby empowering physicians and treatment planners to make more informed, quantitative decisions in ART. We hypothesize that patient-specific DL models will provide more accurate adaptive dose predictions for their respective patients than a population-based DL model.

Methods: We selected 33 patients to train an adaptive population-based (AP) model. Ten additional patients were selected, and their respective initial RT data served as single samples for training patient-specific (PS) models. These 10 patients had an additional 26 ART plans, which were withheld as the test dataset for evaluating AP versus PS model dose prediction performance. We assessed model performance using the mean absolute percent error (MAPE) between predicted doses and the originally delivered ground truth doses. We used the Wilcoxon signed-rank test to determine whether the MAPE differences between the AP and PS model results across the test dataset were statistically significant. Furthermore, we calculated the differences between predicted and ground truth mean doses for the segmented structures and assessed the statistical significance of the differences for each of them.

Results: The average MAPE across the AP and PS model dose predictions was 5.759% and 4.069%, respectively. The Wilcoxon signed-rank test yielded a two-tailed p-value = (Formula presented.), indicating that the MAPE differences between the AP and PS model dose predictions are statistically significant, with a 95% confidence interval of [−2.1610, −1.0130] for the population-level MAPE difference between the AP and PS models. Of the 24 segmented structures, the mean dose differences for 12 structures were statistically significant, with two-tailed p-values < 0.05.

Conclusion: Our study demonstrates the potential of patient-specific deep learning models for ART. Notably, our method streamlines the training process by minimizing the required training dataset: only a single patient's initial treatment data is needed. External institutions considering such a technology could package the model so that only the upload of a reference treatment plan is required for training and deployment. Our single patient learning strategy shows promise in ART owing to its minimal dataset requirement and its utility in personalizing cancer treatment.
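The per-plan evaluation described in the Methods and Results can be illustrated with a short sketch. The snippet below is not the authors' code: it computes a MAPE between a predicted and a ground truth dose distribution and then applies a paired Wilcoxon signed-rank test to the AP versus PS errors across the test plans. The `mape` and `compare_models` helpers, the 5% dose threshold used to exclude near-zero voxels from the percent-error denominator, and the array inputs are illustrative assumptions.

```python
# Minimal sketch of MAPE-based evaluation with a paired Wilcoxon signed-rank test.
# Variable names, the dose-threshold mask, and the function interfaces are assumptions
# for illustration; they are not taken from the published method.
import numpy as np
from scipy.stats import wilcoxon


def mape(pred_dose: np.ndarray, gt_dose: np.ndarray, rel_threshold: float = 0.05) -> float:
    """Mean absolute percent error over voxels receiving a non-negligible dose."""
    # Restrict to voxels above a fraction of the maximum dose to avoid dividing by ~0.
    mask = gt_dose > rel_threshold * gt_dose.max()
    return float(np.mean(np.abs(pred_dose[mask] - gt_dose[mask]) / gt_dose[mask]) * 100.0)


def compare_models(ap_preds, ps_preds, gt_doses):
    """Paired comparison of AP vs. PS dose predictions across the adaptive test plans."""
    ap_mape = np.array([mape(p, g) for p, g in zip(ap_preds, gt_doses)])
    ps_mape = np.array([mape(p, g) for p, g in zip(ps_preds, gt_doses)])
    # Two-sided paired test on the per-plan MAPE differences.
    stat, p_value = wilcoxon(ps_mape, ap_mape, alternative="two-sided")
    return ap_mape.mean(), ps_mape.mean(), p_value
```

In this sketch, `ap_preds`, `ps_preds`, and `gt_doses` would be lists of 3D dose arrays, one entry per withheld adaptive plan, so the Wilcoxon test is paired over plans in the same way the abstract describes pairing AP and PS results on the same test dataset.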

Original language: English (US)
Pages (from-to): 7324-7337
Number of pages: 14
Journal: Medical Physics
Volume: 50
Issue number: 12
DOIs
State: Published - Dec 2023

Keywords

  • adaptive
  • artificial intelligence
  • deep learning
  • dose prediction
  • head and neck cancer
  • patient
  • radiation therapy
  • single

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging
