TY - GEN
T1 - Combining generative and discriminative models for semantic segmentation of CT scans via active learning
AU - Iglesias, Juan Eugenio
AU - Konukoglu, Ender
AU - Montillo, Albert
AU - Tu, Zhuowen
AU - Criminisi, Antonio
PY - 2011
Y1 - 2011
N2 - This paper presents a new supervised learning framework for the efficient recognition and segmentation of anatomical structures in 3D computed tomography (CT), with as little training data as possible. Training supervised classifiers to recognize organs within CT scans requires a large number of manually delineated exemplar 3D images, which are very expensive to obtain. In this study, we borrow ideas from the field of active learning to optimally select a minimum subset of such images that yields accurate anatomy segmentation. The main contribution of this work is in designing a combined generative-discriminative model which: i) drives optimal selection of training data; and ii) increases segmentation accuracy. The optimal training set is constructed by finding unlabeled scans which maximize the disagreement between our two complementary probabilistic models, as measured by a modified version of the Jensen-Shannon divergence. Our algorithm is assessed on a database of 196 labeled clinical CT scans with high variability in resolution, anatomy, pathologies, etc. Quantitative evaluation shows that, compared with randomly selecting the scans to annotate, our method decreases the number of training images by up to 45%. Moreover, our generative model of body shape substantially increases segmentation accuracy when compared to either using the discriminative model alone or a generic smoothness prior (e.g. via a Markov Random Field).
UR - http://www.scopus.com/inward/record.url?scp=80052312929&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80052312929&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-22092-0_3
DO - 10.1007/978-3-642-22092-0_3
M3 - Conference contribution
C2 - 21761643
AN - SCOPUS:80052312929
SN - 9783642220913
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 25
EP - 36
BT - Information Processing in Medical Imaging - 22nd International Conference, IPMI 2011, Proceedings
T2 - 22nd International Conference on Information Processing in Medical Imaging, IPMI 2011
Y2 - 3 July 2011 through 8 July 2011
ER -