Ultrasound Volume Reconstruction From Freehand Scans Without Tracking

Hengtao Guo, Hanqing Chao, Sheng Xu, Bradford J. Wood, Jing Wang, Pingkun Yan

Research output: Contribution to journal › Article › peer-review


Transrectal ultrasound is commonly used for guiding prostate cancer biopsy, where 3D ultrasound volume reconstruction is often desired. Current methods for 3D reconstruction from freehand ultrasound scans require external tracking devices to provide spatial information of an ultrasound transducer. This paper presents a novel deep learning approach for sensorless ultrasound volume reconstruction, which efficiently exploits content correspondence between ultrasound frames to reconstruct 3D volumes without external tracking. The underlying deep learning model, deep contextual-contrastive network (DC2-Net), utilizes self-attention to focus on the speckle-rich areas to estimate spatial movement and then minimizes a margin ranking loss for contrastive feature learning. A case-wise correlation loss over the entire input video helps further smooth the estimated trajectory. We train and validate DC2-Net on two independent datasets, one containing 619 transrectal scans and the other having 100 transperineal scans. Our proposed approach attained superior performance compared with other methods, with a drift rate of 9.64% and a prostate Dice of 0.89. The promising results demonstrate the capability of deep neural networks for universal ultrasound volume reconstruction from freehand 2D ultrasound scans without tracking information.
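The abstract mentions two training objectives: a margin ranking loss for contrastive feature learning and a case-wise correlation loss that smooths the estimated trajectory. As an illustration only, the generic forms of these two losses can be sketched as below; the function names and the exact formulation are assumptions, not the paper's implementation.

```python
import numpy as np

def margin_ranking_loss(x1, x2, y, margin=0.1):
    """Generic margin ranking loss: pushes score x1 above x2 by at least
    `margin` when y = +1 (and the reverse when y = -1)."""
    return np.mean(np.maximum(0.0, -y * (x1 - x2) + margin))

def correlation_loss(pred, target):
    """Generic case-wise correlation loss: 1 - Pearson correlation between
    the predicted and reference motion parameters over a whole scan."""
    p = pred - pred.mean()
    t = target - target.mean()
    return 1.0 - (p @ t) / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-8)
```

A trajectory whose per-frame motion estimates are perfectly linearly correlated with the ground truth yields a correlation loss near zero, which is what drives the smoothing effect described in the abstract.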

Original language: English (US)
Pages (from-to): 970-979
Number of pages: 10
Journal: IEEE Transactions on Biomedical Engineering
Issue number: 3
State: Published - Mar 1 2023


Keywords

  • Contrastive learning
  • deep learning
  • self-attention
  • ultrasound imaging
  • volume reconstruction

ASJC Scopus subject areas

  • Biomedical Engineering


