Research Papers:
Deep learning-based whole-body PSMA PET/CT attenuation correction utilizing Pix-2-Pix GAN
Kevin C. Ma1,2, Esther Mena2, Liza Lindenberg2, Nathan S. Lay1,2, Phillip Eclarinal2, Deborah E. Citrin3, Peter A. Pinto4, Bradford J. Wood5, William L. Dahut6, James L. Gulley7, Ravi A. Madan6, Peter L. Choyke1,2, Ismail Baris Turkbey1,2 and Stephanie A. Harmon1,2
1 Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
2 Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
3 Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
4 Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
5 Center for Interventional Oncology, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
6 Genitourinary Malignancies Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
7 Center for Immuno-Oncology, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
Correspondence to:
Stephanie A. Harmon, email: [email protected]
Keywords: deep learning; PSMA PET; attenuation correction
Received: September 30, 2023 Accepted: April 19, 2024 Published: May 07, 2024
ABSTRACT
Purpose: The number of sequential PET/CT studies that oncology patients can undergo during their treatment and follow-up course is limited by radiation dosage. We propose an artificial intelligence (AI) tool to produce attenuation-corrected PET (AC-PET) images from non-attenuation-corrected PET (NAC-PET) images, reducing the need for low-dose CT scans.
Methods: A deep learning algorithm based on the 2D Pix-2-Pix generative adversarial network (GAN) architecture was developed from paired AC-PET and NAC-PET images. 18F-DCFPyL PSMA PET/CT studies from 302 prostate cancer patients were split into training, validation, and testing cohorts (n = 183, 60, and 59, respectively). Models were trained with two normalization strategies: Standardized Uptake Value (SUV)-based and SUV-Nyul-based. Scan-level performance was evaluated by normalized mean square error (NMSE), mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Lesion-level analysis was performed in regions of interest prospectively identified by nuclear medicine physicians. SUV metrics were evaluated using the intraclass correlation coefficient (ICC), repeatability coefficient (RC), and linear mixed-effects modeling.
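For illustration, the following is a minimal sketch of a Pix-2-Pix-style paired training step for NAC-PET to AC-PET translation in PyTorch. The tiny generator, the PatchGAN-style discriminator, the BCE adversarial plus L1 reconstruction losses, and the lambda_l1 weight of 100 are assumptions taken from the standard Pix2Pix formulation, not the exact networks or hyperparameters used in this study.

```python
# Minimal sketch of a Pix2Pix-style training step for NAC-PET -> AC-PET
# slice translation. The networks below are simplified stand-ins (NOT the
# paper's exact architecture); losses and lambda_l1 follow the generic
# Pix2Pix recipe and are assumptions.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Simplified encoder-decoder; the study uses a full 2D Pix-2-Pix generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic on the concatenated (NAC, AC) slice pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )
    def forward(self, nac, ac):
        return self.net(torch.cat([nac, ac], dim=1))

G, D = TinyGenerator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss, lambda_l1 = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0  # assumed weight

def train_step(nac, ac):
    """One paired update; nac and ac are (B, 1, H, W) normalized PET slices."""
    # Discriminator: real (NAC, AC) pair vs. generated (NAC, G(NAC)) pair.
    fake_ac = G(nac).detach()
    d_real, d_fake = D(nac, ac), D(nac, fake_ac)
    loss_d = 0.5 * (adv_loss(d_real, torch.ones_like(d_real)) +
                    adv_loss(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the true AC-PET slice.
    fake_ac = G(nac)
    d_fake = D(nac, fake_ac)
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1_loss(fake_ac, ac)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```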
Results: Median NMSE, MAE, SSIM, and PSNR were 13.26%, 3.59%, 0.891, and 26.82, respectively, in the independent test cohort. ICCs for SUVmax and SUVmean were 0.88 and 0.89, respectively, indicating high correlation between original and AI-generated quantitative imaging markers. Lesion location, lesion density (Hounsfield units), and lesion uptake all significantly impacted the relative error in generated SUV metrics (all p < 0.05).
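As an illustration of the agreement measures reported above, the sketch below computes the scan-level metrics (NMSE, MAE, SSIM, PSNR) and a Bland-Altman-style repeatability coefficient between an original AC-PET volume and an AI-generated one. The percent-style NMSE/MAE normalizations and the RC formula are assumptions, since the abstract does not give the exact definitions used in the paper.

```python
# Sketch of scan-level agreement metrics between an original AC-PET volume
# and the AI-generated volume. NMSE/MAE normalizations and the RC formula
# are assumed conventions, not the paper's stated definitions.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def scan_metrics(ac_pet: np.ndarray, gen_pet: np.ndarray) -> dict:
    """ac_pet, gen_pet: co-registered, identically shaped (e.g., SUV-normalized) volumes."""
    data_range = float(ac_pet.max() - ac_pet.min())
    nmse = 100.0 * np.sum((ac_pet - gen_pet) ** 2) / np.sum(ac_pet ** 2)   # assumed definition
    mae = 100.0 * np.mean(np.abs(ac_pet - gen_pet)) / data_range            # assumed definition
    ssim = structural_similarity(ac_pet, gen_pet, data_range=data_range)
    psnr = peak_signal_noise_ratio(ac_pet, gen_pet, data_range=data_range)
    return {"NMSE_%": nmse, "MAE_%": mae, "SSIM": ssim, "PSNR": psnr}

def repeatability_coefficient(orig_suv: np.ndarray, gen_suv: np.ndarray) -> float:
    """Bland-Altman-style RC = 1.96 * SD of paired lesion-level SUV differences (assumed form)."""
    return 1.96 * float(np.std(orig_suv - gen_suv, ddof=1))
```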
Conclusion: The Pix-2-Pix GAN model for generating AC-PET yields SUV metrics that correlate highly with those of the original images. AI-generated PET images show clinical potential for reducing the need for CT scans for attenuation correction while preserving quantitative markers and image quality.