Dense surface reconstruction using a learning-based monocular vSLAM model for laparoscopic surgery

James Yu, Kelden Pruitt, Nati Nawawithan, Brett A. Johnson, Jeffrey Gahan, Baowei Fei

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Augmented reality (AR) has seen increased interest and attention for its application in surgical procedures. AR-guided surgical systems can overlay anatomy segmented from pre-operative imaging onto the user's environment to delineate hard-to-see structures and subsurface lesions intraoperatively. While previous works have utilized pre-operative imaging such as computed tomography or magnetic resonance images, registration methods still lack the ability to accurately register deformable anatomical structures across modalities and dimensionalities without fiducial markers. This is especially true of minimally invasive abdominal surgical techniques, which often employ a monocular laparoscope with its inherent limitations. Surgical scene reconstruction is a critical step toward the accurate registration needed for AR-guided surgery and for other downstream AR applications such as remote assistance and surgical simulation. In this work, we utilize a state-of-the-art (SOTA) deep-learning-based visual simultaneous localization and mapping (vSLAM) algorithm to generate a dense 3D reconstruction, with camera pose estimates and depth maps, from video obtained with a monocular laparoscope. The proposed method can robustly reconstruct surgical scenes using real-time data and provide camera pose estimates without stereo imaging or additional sensors, which increases its usability and makes it less intrusive. We also demonstrate a framework for evaluating current vSLAM algorithms on non-Lambertian, low-texture surfaces and explore the use of their outputs in downstream tasks. We expect that these evaluation methods can be used for the continual refinement of newer algorithms for AR-guided surgery.
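The dense reconstruction step described in the abstract can be sketched concretely: a monocular vSLAM model outputs a per-frame depth map and a camera pose, and a dense map follows from back-projecting each depth map through the camera intrinsics and transforming the resulting points into a common world frame. The snippet below is a minimal illustration of that fusion step, not the authors' implementation; the pinhole intrinsic matrix K, the camera-to-world pose convention, and all function names are assumptions made for the sketch.

# Minimal sketch (not the paper's released code): fusing per-frame depth maps
# and camera poses -- the two outputs a monocular vSLAM model provides -- into
# a dense world-frame point cloud via pinhole back-projection. The intrinsics
# and the camera-to-world pose convention are illustrative assumptions.
import numpy as np

def backproject_frame(depth, K, T_cam_to_world, stride=4):
    """Lift a depth map of shape (H, W) into 3D world coordinates.

    depth          : per-pixel depth (meters), as predicted by the vSLAM model
    K              : 3x3 pinhole intrinsic matrix of the laparoscope
    T_cam_to_world : 4x4 camera pose estimated by the vSLAM front end
    stride         : pixel subsampling factor to keep the cloud tractable
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(0, w, stride), np.arange(0, h, stride))
    z = depth[vs, us]
    valid = z > 0  # skip pixels with no depth estimate
    u, v, z = us[valid], vs[valid], z[valid]

    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts_cam = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)

    # Transform from the camera frame into the world frame using the pose
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=-1)
    return (T_cam_to_world @ pts_h.T).T[:, :3]

def fuse_sequence(depths, poses, K):
    """Accumulate every frame of a sequence into one dense point cloud."""
    return np.concatenate(
        [backproject_frame(d, K, T) for d, T in zip(depths, poses)], axis=0)

In practice a pipeline like this would typically be followed by outlier filtering and voxel downsampling (or TSDF fusion) before meshing, but the back-projection above captures how pose and depth estimates combine into a dense surface reconstruction.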

Original language: English (US)
Title of host publication: Medical Imaging 2024
Subtitle of host publication: Image-Guided Procedures, Robotic Interventions, and Modeling
Editors: Jeffrey H. Siewerdsen, Maryam E. Rettmann
Publisher: SPIE
ISBN (Electronic): 9781510671607
State: Published - 2024
Event: Medical Imaging 2024: Image-Guided Procedures, Robotic Interventions, and Modeling - San Diego, United States
Duration: Feb 19, 2024 – Feb 22, 2024

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 12928
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2024: Image-Guided Procedures, Robotic Interventions, and Modeling
Country/Territory: United States
City: San Diego
Period: 2/19/24 – 2/22/24

Keywords

  • 3D reconstruction
  • Augmented reality
  • Deep learning
  • Laparoscopy
  • MRI
  • Neural networks
  • SLAM
  • Image-guided surgery

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Atomic and Molecular Physics, and Optics
  • Biomaterials
  • Radiology, Nuclear Medicine and Imaging

