Please use this identifier to cite or link to this item:
http://localhost/xmlui/handle/1/91

| Title: | Survival prediction in high grade gliomas using magnetic resonance imaging |
| Authors: | Malhotra, Radhika |
| Keywords: | Department of Electronics & Communication Engineering |
| Issue Date: | 2024 |
| Publisher: | NIT Jalandhar |
Abstract:

Gliomas, a heterogeneous group of primary brain tumors, pose a significant challenge for both prognosis and treatment due to their diverse molecular and anatomical characteristics. Grade IV gliomas, known as Glioblastoma Multiforme (GBM), represent the most aggressive and malignant tumor type, with a median survival often measured in months rather than years. Accurate estimation of the survival time of GBM patients is therefore indispensable for designing treatment methodologies and follow-up procedures. Survival prediction in high-grade gliomas (HGG) plays a crucial role in the management and treatment planning of patients facing this aggressive form of brain cancer. Radiological imaging-based survival prediction methods offer detailed insights into the characteristics of the brain tumor, including its size, location, and extent of infiltration. Among imaging modalities, Magnetic Resonance Imaging (MRI) offers a non-invasive method for precise identification and characterization of the brain's internal structures. MRI holds paramount importance in brain tumor imaging because of its excellent tissue contrast, which readily differentiates between different types of brain tumors based on their distinctive features. Moreover, the absence of ionizing radiation makes MRI a safe and repeatable imaging technique, essential for longitudinal monitoring and assessment of treatment response.

The main aim of this research is to develop a survival prediction system that estimates the survival times of HGG patients using the different MRI modalities. The system integrates Machine Learning (ML) and Deep Learning (DL) algorithms with radiological imaging to enhance its predictive power. The study is performed on tumorous brain volumes from the multi-modal Brain Tumor Segmentation (BraTS) benchmark datasets available online. Three BraTS releases, namely BraTS 2018, BraTS 2019, and BraTS 2020, are utilized in this research work; these datasets feature scans from multiple institutions and diverse patient populations, capturing the tumor variability that exists in real clinical scenarios. The proposed work is broadly organized into the stages of (i) 2D segmentation, (ii) survival prediction using the 2D segmented output, (iii) 3D tumor segmentation, (iv) Radiomics feature extraction, and (v) survival prediction using the extracted Radiomics features.

The first stage is the 2D segmentation of the three glioma sub-regions using three MRI modalities: FLAIR, T2, and T1-CE. For this purpose, a DL-based Context Bridge-Dense Dilated Residual Net (CB-D2RNet) architecture with multi-fold contributions is proposed. In this architecture, a Dense Dilated Convolutional (DDC) block is designed that integrates an inception-like module with residual connections and multi-scale dilated convolutional blocks. The four cascaded branches of this DDC block fuse feature maps from the encoder side into the decoder while enhancing information propagation at different levels. The dilation rate in every encoder block is carefully selected so that the receptive fields scale exponentially; this preserves even small targets, which is essential for recognizing enhancing or necrotic regions in HGGs. Furthermore, the skip connections of the original U-Net are redesigned to alleviate the contextual gap between the different levels of the encoder and decoder.
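The exact DDC design is given in the thesis itself, but the multi-scale dilated idea can be sketched as a minimal PyTorch block that runs parallel dilated convolutions and fuses them through a residual connection. The channel width and the dilation rates (1, 2, 4, 8) are illustrative assumptions, not the published CB-D2RNet configuration.

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Toy multi-scale dilated block: parallel dilated 3x3 convolutions
    whose outputs are concatenated, fused by a 1x1 convolution, and added
    back to the input (residual connection). Channel width and dilation
    rates are illustrative, not the published CB-D2RNet/DDC settings."""

    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        # padding == dilation keeps the spatial size constant for a 3x3
        # kernel, so all branches can be concatenated along channels.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # A 1x1 convolution maps the concatenated features back to the
        # input width so the residual addition is shape-compatible.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(multi_scale)

# Example: 32-channel feature maps of a 240x240 slice (the BraTS in-plane size)
block = DilatedResidualBlock(channels=32)
print(block(torch.randn(1, 32, 240, 240)).shape)  # torch.Size([1, 32, 240, 240])
```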
The proposed model has only one convolution operation per encoder/decoder block, greatly reducing the number of model parameters to around 6.7 million. Next, two loss functions are proposed to facilitate proper model training and tumor segmentation. These loss functions account for the small spatial extent of tumor regions and effectively handle the class-imbalance problem. Moreover, they weight the total loss by incorporating a negative logarithm of the Dice coefficient, which provides a regularization effect and limits segmentation errors. The first loss function is designed to delineate the complete abnormal region from the MRI modalities and enhance binary segmentation accuracy; it achieves a Dice Similarity Coefficient (DSC) of 0.960 for complete tumor extraction on the BraTS 2018 dataset and 0.962 on the BraTS 2019 dataset. The second loss function offers effective segmentation of the three HGG tumor sub-regions and effectively trains the proposed segmentation model (CB-D2RNet); it obtains training DSCs for the Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) of 0.982, 0.987, and 0.976, respectively, on the BraTS 2018 dataset, and 0.983, 0.989, and 0.977 on the BraTS 2019 dataset.
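The precise formulations of the two losses are specific to the thesis; the following is a minimal sketch of the common idea stated above, a Dice-based loss augmented with a negative-log-Dice regularizer. The soft-Dice form and the `weight` factor are assumptions for illustration.

```python
import torch

def soft_dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice coefficient between predicted probabilities and a binary mask."""
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def log_dice_loss(pred: torch.Tensor, target: torch.Tensor, weight: float = 1.0) -> torch.Tensor:
    """(1 - Dice) plus a weighted negative-log-Dice term. The -log(Dice)
    term grows sharply as Dice -> 0, acting as a regularizer that penalizes
    poorly segmented (typically small, under-represented) tumor regions.
    `weight` is a hypothetical knob, not the thesis's exact scheme."""
    dsc = soft_dice(pred, target)
    return (1.0 - dsc) - weight * torch.log(dsc)

# Example: random "probabilities" against a sparse mask (~5% positive pixels),
# mimicking the class imbalance between tumor and background.
pred = torch.rand(2, 1, 240, 240)
mask = (torch.rand(2, 1, 240, 240) > 0.95).float()
print(log_dice_loss(pred, mask).item())
```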
After 2D tumor segmentation, the extracted tumor sub-regions and MRI modalities are employed to build a DL-based survival prediction model. In this model, the overall survival classification pipeline is integrated with modality-specific and modality-concatenated networks to enhance classification accuracy. The modality-specific pathway has three separate Convolutional Neural Networks (CNNs) for extracting rich predictive information from the three sub-regions of HGGs. In this pathway, the image vectors of the three respective modalities are fused with the higher-level features to overcome the discrepancy between low-level and high-level feature representations. Furthermore, to maintain the intra-modality correlations, a modality-concatenated pathway is also added to the survival prediction system. The complete system is supported by visual interpretability analysis, generating attention maps over different convolutional levels and non-predicted labels. Overall, the proposed system performs competitively in classifying HGG patients into three survival groups, namely short, mid, and long survivors, and achieves classification accuracy, sensitivity, and specificity of more than 99% on different datasets.

Because 2D models have limited capability to extract 3D context, 3D tumor segmentation is performed in the next stage. For this purpose, a 3D-Multi Module Fusion U-Net (3D-M2FU-Net) model with limited computational requirements is proposed. In this model, Spatially Separated Convolutional (SSC) operations are introduced to strike a balance between maintaining segmentation accuracy and limiting computational demands: each filter is decomposed into two cascaded filters, in which the first convolution operates across the spatial dimensions and the second across the third dimension. Owing to this, only 1.7 million trainable parameters are used for this segmentation model. Also, a Fused Atrous Spatially Separated Module (FASSM) is proposed with three serially connected streams to mitigate the information loss caused by the textural and morphological variations of gliomas. In each of the three streams, minute tumor details are extracted by parallelizing the input into three convolutional blocks. Furthermore, the different skip-connectivity levels of the encoder-decoder are equipped with three different Boundary-Guided Modules (BGM) to counteract the loss of non-uniform tumor boundaries. Finally, a Recurrent Spatially Separated Module (RSSM) is included in the decoder to facilitate a better flow of coefficients between layers. This architecture achieves DSCs of 0.932, 0.947, and 0.919 for WT, TC, and ET, respectively, on the BraTS 2020 dataset.

In the second stage of this work, survival prediction was performed with DL-based models, since the input data consisted of 2D slices of the segmented tumor. After the 3D tumor segmentation step, however, the number of input samples is limited because the complete segmented 3D volumes (instead of 2D slices) must be utilized for survival prediction. Therefore, Traditional Machine Learning (TML) algorithms are employed because of their applicability to smaller sample sizes. To this end, a Radiomics feature extraction process transforms the radiological images and segmented volumes into mineable data with prognostic information; the Radiomics features are extracted from each 3D segmented tumor sub-region individually. The six Radiomics feature categories employed in this work are: first-order statistics, 3D morphological features, Gray-Level Co-occurrence Matrix (GLCM), Gray-Level Dependence Matrix (GLDM), Gray-Level Run Length Matrix (GLRLM), and Gray-Level Size Zone Matrix (GLSZM) features. For the feature extraction process, three cases are evaluated to fully validate the proposed survival prediction system: in the first case, the Radiomics features are extracted from the available ground-truth masks of the BraTS 2020 dataset; in the second case, they are extracted from the 3D segmented output of the proposed 3D segmentation architecture (3D-M2FU-Net); and in the third case, they are extracted from the 3D segmented output of an existing segmentation model.
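A minimal sketch of extracting these six feature categories with the open-source PyRadiomics package is shown below. The abstract does not name the extraction tool, so PyRadiomics is an assumption; the NIfTI file names are placeholders, and in PyRadiomics the 3D morphological features fall under the 'shape' feature class.

```python
# pip install pyradiomics
from radiomics import featureextractor

# Enable only the six feature classes used in this work.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
for feature_class in ("firstorder", "shape", "glcm", "gldm", "glrlm", "glszm"):
    extractor.enableFeatureClassByName(feature_class)

# Placeholder paths: one MRI volume and one binary mask of a segmented
# tumor sub-region (e.g., the whole tumor), both in NIfTI format.
features = extractor.execute("patient001_flair.nii.gz", "patient001_wt_mask.nii.gz")

# Keep only the feature values, dropping PyRadiomics' diagnostic entries.
radiomics_vector = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(radiomics_vector), "features extracted")
```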
On visualizing the intensity distributions of these extracted features, a class-overlap problem is observed, in which samples from different classes show similar characteristics and occupy a common region of the feature space. To overcome this problem, two novel class-wise feature enhancement strategies are developed before the survival classification of HGG patients is conducted. In the first strategy, the feature values of the mid- and long-survivor classes are tuned according to their differences from the short-survivor class values: the difference between the feature values of two survival classes is estimated and then linearly scaled. This strategy is first designed using the ground-truth masks of the dataset (first case) and then validated using the 3D segmented output of the proposed 3D segmentation architecture (second case). Additionally, the enhanced features are observed to be non-Gaussian on scrutinizing their probability distributions; therefore, feature transformation techniques such as the Box-Cox and logarithmic transforms are utilized to restore the normality of the enhanced data. This satisfies the requirement of intensity standardization, a crucial data-processing step applied before feeding the feature values to the classifier. By incorporating these steps, the complete system yields overall testing classification accuracy, precision, recall, and F1-score of more than 99% on the BraTS datasets.

In the second enhancement strategy, the feature values of the mid- and long-survivor classes are exponentially scaled to achieve prominent multi-class separability without information loss. The exponential function is computed on the difference between the feature values of the two respective classes using variable scaling factors. This strategy assigns different weights to the feature values and transforms the raw features into a space where class separability becomes more prominent; it uses the 3D segmented output obtained from an existing segmentation model (third case). Furthermore, the proposed strategy is applied to identify the most informative feature category and tumor region, and the experimental analysis shows that the morphological features computed from the WT region of HGGs are the most informative for survival classification. In summary, a survival classification accuracy of greater than 99% is attained on the BraTS datasets using the Gaussian Naïve Bayes classifier.
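The precise enhancement formulas and scaling factors are specific to the thesis, but the final classification stage can be sketched as follows: a hypothetical `exponential_enhance` function renders the second strategy's idea of exponentially scaling class-wise differences, followed by Box-Cox normalization and a Gaussian Naïve Bayes classifier as named above. The toy data and the `scale` parameter are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer

def exponential_enhance(X, y, reference_class=0, scale=0.1):
    """Hypothetical sketch of the second enhancement strategy: shift the
    mid/long-survivor feature values by an exponential of their class-mean
    difference from the short-survivor (reference) class. The thesis's
    exact formulation and scaling factors are not reproduced here."""
    X = X.copy()
    ref_mean = X[y == reference_class].mean(axis=0)
    for cls in np.unique(y):
        if cls == reference_class:
            continue
        diff = X[y == cls].mean(axis=0) - ref_mean
        X[y == cls] = X[y == cls] + np.exp(scale * diff)
    return X

# Toy radiomics-like features for three survival classes (0=short, 1=mid, 2=long);
# gamma-distributed values are strictly positive, as Box-Cox requires.
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=1.0, size=(90, 5))
y = np.repeat([0, 1, 2], 30)

X_enh = exponential_enhance(X, y)

# Box-Cox normalization (toward the Gaussian NB assumption), then classification.
clf = make_pipeline(PowerTransformer(method="box-cox", standardize=True), GaussianNB())
clf.fit(X_enh, y)
print(clf.score(X_enh, y))
```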
To conclude, the present work designs an automated end-to-end survival prediction system for HGG patients using multi-modal MRI. The proposed system can significantly assist radiologists in planning treatment methodologies for HGG patients based on their chances of survival, and it holds the potential to enhance the clinical management of HGG patients by providing a reliable prognostic tool based on radiological imaging and ML techniques.

| URI: | http://localhost/xmlui/handle/1/91 |
| Appears in Collections: | PHD - Thesis |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Radhikamalhotra_18504002_Thesis (1).pdf | | 23.87 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
