Redha Touati, William Trung Le and Samuel Kadoury
Article (2024)
Open access to the full text of this document (Published Version). Terms of use: Creative Commons Attribution.
Abstract
Objective. Head and neck radiotherapy planning requires the electron densities of different tissues for dose calculation. Dose calculation from imaging modalities such as MRI remains an unsolved problem, since MRI does not provide electron density information.
Approach. We propose a generative adversarial network (GAN) approach that synthesizes CT (sCT) images from T1-weighted MRI acquisitions in head and neck cancer patients. Our contribution is to exploit new features that are relevant for improving multimodal image synthesis, and thus the quality of the generated CT images. More precisely, we propose a dual-branch generator based on the U-Net architecture and on an augmented multi-planar branch. The augmented branch learns specific 3D dynamic features, which describe the dynamic image shape variations and are extracted from different viewpoints of the volumetric input MRI. The architecture of the proposed model relies on an end-to-end convolutional U-Net embedding network.
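The paper's exact layer configuration is not given in this record; the following is a minimal PyTorch sketch of the general idea only, assuming a small 2D U-Net main branch and a hypothetical multi-planar branch that summarizes the input MRI volume from three orthogonal viewpoints. Simple mean projections stand in for the learned 3D dynamic features, and the input-level fusion is one of several plausible choices.

```python
# Illustrative sketch only: all module names, channel widths, and the
# fusion strategy are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with instance norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MultiPlanarBranch(nn.Module):
    """Encodes the 3D MRI volume, then summarizes it along the axial,
    coronal, and sagittal axes into one 2D feature map (hypothetical
    stand-in for the paper's 3D dynamic features)."""

    def __init__(self, out_ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, vol):                      # vol: (B, 1, D, H, W)
        f = self.enc(vol)                        # (B, C, D, H, W)
        h, w = vol.shape[-2:]
        views = [f.mean(dim=2),                  # axial view:    (B, C, H, W)
                 f.mean(dim=3),                  # coronal view:  (B, C, D, W)
                 f.mean(dim=4)]                  # sagittal view: (B, C, D, H)
        # Resample each view to the slice grid and average them.
        views = [F.interpolate(v, size=(h, w), mode="bilinear",
                               align_corners=False) for v in views]
        return torch.stack(views, 0).mean(0)     # (B, C, H, W)


class DualBranchGenerator(nn.Module):
    """2D U-Net main branch; multi-planar features are concatenated
    with the input slice before the first encoder stage."""

    def __init__(self, planar_ch=32, base=64):
        super().__init__()
        self.planar = MultiPlanarBranch(planar_ch)
        self.pool = nn.MaxPool2d(2)
        self.enc1 = conv_block(1 + planar_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 1, 1)

    def forward(self, mri_slice, mri_volume):
        ctx = self.planar(mri_volume)            # volumetric context
        e1 = self.enc1(torch.cat([mri_slice, ctx], dim=1))
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.tanh(self.out(d1))          # synthetic CT slice


# Usage: a 128x128 slice together with its surrounding 32-slice volume.
g = DualBranchGenerator()
sct = g(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 32, 128, 128))
print(sct.shape)  # torch.Size([1, 1, 128, 128])
```

In an adversarial setup, this generator would be trained against a discriminator on paired MRI/CT slices; the discriminator and losses are omitted here for brevity.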
Results. The proposed model achieves a mean absolute error (MAE) of 18.76 ± 5.167 in the target Hounsfield unit (HU) space on sagittal head and neck acquisitions, with a mean structural similarity (MSSIM) of 0.95 ± 0.09 and a Fréchet inception distance (FID) of 145.60 ± 8.38. The model yields an MAE of 26.83 ± 8.27 when generating specific primary tumor regions on axial patient acquisitions, with a Dice score of 0.73 ± 0.06 and an FID of 122.58 ± 7.55. Our model improves over other state-of-the-art GAN approaches by 3.8% on a tumor test set. On both sagittal and axial acquisitions, the model also yields the best peak signal-to-noise ratio, 27.89 ± 2.22, when synthesizing MRI from CT input.
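For reference, a minimal sketch of how the MAE (in HU) and PSNR figures above are typically computed between a synthetic and a reference CT, assuming NumPy arrays already resampled to a common voxel grid; the paper's exact masking, HU clipping, and data range are not specified in this record.

```python
import numpy as np


def mae_hu(sct, ct):
    """Mean absolute error in Hounsfield units."""
    return np.mean(np.abs(sct - ct))


def psnr(sct, ct, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the
    reference image's intensity span (an assumption)."""
    if data_range is None:
        data_range = ct.max() - ct.min()
    mse = np.mean((sct - ct) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```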
Significance. The proposed model synthesizes both sagittal and axial CT tumor images, used for radiotherapy treatment planning in head and neck cancer cases. The performance analysis across different imaging metrics and under different evaluation strategies demonstrates the effectiveness of our dual CT synthesis model in producing high-quality sCT images compared to other state-of-the-art approaches. Our model could improve clinical tumor analysis, although further clinical validation remains to be explored.
Uncontrolled Keywords
image generation; dynamic features; 3D multi-view image modeling; dual feature learning; adversarial network; generative network model
Additional Information: Research group: MedICAL Laboratory
Subjects: 1900 Biomedical engineering > 1900 Biomedical engineering; 1900 Biomedical engineering > 1901 Biomedical technology; 2500 Electrical and electronic engineering > 2500 Electrical and electronic engineering
Department: Department of Computer Engineering and Software Engineering
Research Center: Other
Funders: NSERC / CRSNG, Fonds de recherche du Québec - Santé
Grant number: GPIN-2020-06558, 293740
PolyPublie URL: https://publications.polymtl.ca/58932/
Journal Title: Physics in Medicine & Biology (vol. 69, no. 15)
Publisher: IOP Publishing
DOI: 10.1088/1361-6560/ad611a
Official URL: https://doi.org/10.1088/1361-6560/ad611a
Date Deposited: 29 Jul 2024 13:39
Last Modified: 10 Feb 2025 19:47
Cite in APA 7: Touati, R., Le, W. T., & Kadoury, S. (2024). Multi-planar dual adversarial network based on dynamic 3D features for MRI-CT head and neck image synthesis. Physics in Medicine & Biology, 69(15), 155012 (36 pages). https://doi.org/10.1088/1361-6560/ad611a