Enhancing Transferability of Features from Pretrained Deep Neural Networks for Lung Nodule Classification

Hongming Shan, Ge Wang, Mannudeep K. Kalra, Rodrigo Canellas de Souza, Junping Zhang


Published in: Fully3D 2017 Proceedings


deep learning, lung nodule classification, fine-tuning, feature selection
Among the most popular feature extractors, pretrained deep neural networks play a central role in transfer learning, extracting high-level features from small datasets. The transferability of these features, however, cannot be guaranteed for the task of interest. To enhance transferability, this paper combines fine-tuning and feature selection to improve the accuracy of lung nodule classification. The fine-tuning technique retrains the pretrained neural network on the lung nodule dataset, while feature selection captures a subset of features useful for lung nodule classification. Preliminary experimental results on CT images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) confirm that lung nodule classification accuracy can be significantly improved via fine-tuning and feature selection. Furthermore, the resulting features outperform competitive handcrafted texture descriptors.
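The second step described above, selecting a useful subset of deep features, can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not the paper's exact pipeline: it uses random-forest feature importances with scikit-learn's `SelectFromModel` on synthetic data standing in for CNN features, and the dimensions and thresholds are assumptions for demonstration only.

```python
# Hedged sketch: selecting a useful subset of deep features by
# random-forest importance. The synthetic matrix X stands in for
# features extracted from a pretrained CNN; sizes are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# 200 "nodule patches", each with a 256-d stand-in deep feature vector.
X, y = make_classification(
    n_samples=200, n_features=256, n_informative=20, random_state=0
)

# Fit a random forest and keep features whose importance exceeds the mean.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
selector = SelectFromModel(rf, prefit=True, threshold="mean")
X_selected = selector.transform(X)

print("features kept:", X_selected.shape[1], "of", X.shape[1])
```

A classifier (e.g., another random forest or an SVM) would then be trained on `X_selected`; in the paper's setting, selection is applied to features taken from a fine-tuned pretrained network rather than synthetic data.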
Hongming Shan
Fudan University, China
Ge Wang
Biomedical Imaging Center, Rensselaer Polytechnic Institute, USA
Mannudeep K. Kalra
Massachusetts General Hospital, USA
Rodrigo Canellas de Souza
Massachusetts General Hospital, USA
Junping Zhang
Fudan University, China