CT Image Denoising with Perceptive Deep Neural Networks

Qingsong Yang, Ge Wang, Pingkun Yan, Mannudeep K. Kalra

DOI: 10.12059/Fully3D.2017-11-3202015

Published in: Fully3D 2017 Proceedings

Pages: 858-863

Keywords: low dose CT, image denoising, deep learning, perceptual loss
Abstract: The increasing use of CT in modern medical practice has raised concerns over the associated radiation dose. Reducing the radiation dose, however, increases noise and artifacts, which can adversely affect diagnostic confidence. Denoising low-dose CT images can help restore diagnostic confidence, but it is a challenging, ill-posed problem, since one noisy image patch may correspond to many different clean output patches. In the past decade, machine learning based approaches have made impressive progress in this direction. However, most of these methods, including the recently popularized deep learning techniques, aim to minimize the mean squared error (MSE) between a denoised CT image and the ground truth, which loses important structural details through over-smoothing even though the PSNR-based performance measure looks good. In this work, we introduce a new perceptual similarity measure as the objective function of a deep convolutional neural network for CT image denoising. Instead of directly computing a pixel-to-pixel MSE intensity loss, we compare the perceptual features of a denoised output against those of the ground truth in a feature space. As a result, the proposed method not only reduces the image noise level but also preserves critical structural information. Promising results have been obtained in our experiments with a large number of CT images.
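To make the feature-space comparison concrete, the following is a minimal sketch of a perceptual loss, assuming a PyTorch/torchvision setup with a pretrained, truncated VGG-16 as the fixed feature extractor; the framework, layer cutoff, and the PerceptualLoss class name are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    class PerceptualLoss(nn.Module):
        # Compares a denoised image and its ground truth in the feature space of a
        # fixed, pretrained network instead of in pixel space (per-pixel MSE).
        def __init__(self):
            super().__init__()
            # Truncated VGG-16 as the feature extractor (layer choice is an assumption).
            vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16]
            for p in vgg.parameters():
                p.requires_grad = False  # keep the extractor fixed during training
            self.features = vgg.eval()

        def forward(self, denoised, target):
            # CT slices are single-channel; replicate to three channels for VGG input.
            d = denoised.repeat(1, 3, 1, 1)
            t = target.repeat(1, 3, 1, 1)
            # MSE between the feature maps, i.e., the perceptual (feature-space) loss.
            return nn.functional.mse_loss(self.features(d), self.features(t))

Minimizing such a loss pushes the denoising network to match the structural features of the ground truth rather than individual pixel intensities, which is how the over-smoothing of purely MSE-trained networks is avoided.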
Qingsong Yang
Rensselaer Polytechnic Institute, USA
Ge Wang
Biomedical Imaging Center, Rensselaer Polytechnic Institute, USA
Pingkun Yan
Philips Research North America, USA
Mannudeep K. Kalra
Massachusetts General Hospital, Harvard Medical School, USA