<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Archiving and Interchange DTD v2.3 20070202//EN" "archivearticle.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="methods-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Bohr. Iam.</journal-id>
<journal-title>BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Bohr. Iam.</abbrev-journal-title>
<issn pub-type="epub">2583-5521</issn>
<publisher>
<publisher-name>BOHR</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.54646/bijiam.2022.10</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Methods</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Enhanced 3D brain tumor segmentation using assorted precision training</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Pandya</surname> <given-names>Adwaitt</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Oguine</surname> <given-names>Ozioma Collins</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c002"><sup>&#x002A;</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Bhargava</surname> <given-names>Harita</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c003"><sup>&#x002A;</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Zade</surname> <given-names>Shrikant</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c004"><sup>&#x002A;</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computer Science, Oriental Institute of Science and Technology</institution>, <addr-line>Bhopal</addr-line>, <country>India</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Computer Science, University of Abuja</institution>, <addr-line>Abuja</addr-line>, <country>Nigeria</country></aff>
<author-notes>
<corresp id="c001">&#x002A;Correspondence: Adwaitt Pandya, <email>adwaitt1999@gmail.com</email></corresp>
<corresp id="c002">Ozioma Collins Oguine, <email>oziomaoguine007@gmail.com</email></corresp>
<corresp id="c003">Harita Bhargava, <email>haritabhargava28@gmail.com</email></corresp>
<corresp id="c004">Shrikant Zade, <email>cdzshrikant@gmail.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>21</day>
<month>12</month>
<year>2022</year>
</pub-date>
<volume>1</volume>
<issue>1</issue>
<fpage>65</fpage>
<lpage>69</lpage>
<history>
<date date-type="received">
<day>28</day>
<month>10</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>08</day>
<month>11</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Pandya, Oguine, Bhargava and Zade.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Pandya, Oguine, Bhargava and Zade</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>A brain tumor is a medical disorder faced by individuals of all demographics. Medically, it is described as the spread of non-essential cells near or throughout the brain. Symptoms of this ailment include headaches, seizures, and sensory changes. This research considers the two main categories of brain tumors: benign and malignant. Benign tumors grow slowly, whereas the rapid growth of malignant tumors makes them dangerous. Early identification of brain tumors is a crucial factor in patient survival. This research provides a state-of-the-art approach to the early identification of tumors within the brain. We implemented the SegResNet architecture, a widely adopted architecture for three-dimensional segmentation, and trained it using automatic mixed-precision training. We used the dice loss function and the dice metric to evaluate the model, obtaining a mean dice score of 0.84: 0.84 for the tumor core, 0.90 for the whole tumor, and 0.79 for the enhancing tumor.</p>
</abstract>
<kwd-group>
<kwd>brain tumor</kwd>
<kwd>3D segmentation</kwd>
<kwd>brain tumor segmentation</kwd>
<kwd>3D convolutional neural network</kwd>
<kwd>fully convolutional neural network</kwd>
</kwd-group>
<counts>
<fig-count count="5"/>
<table-count count="1"/>
<equation-count count="1"/>
<ref-count count="18"/>
<page-count count="5"/>
<word-count count="2816"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>1. Introduction</title>
<p>A brain tumor is an abnormal growth of cells in the brain that may or may not be cancerous. Tumors are generally classified into two classes: benign and malignant. Benign tumors are considered non-cancerous, i.e., they grow locally and do not spread to other tissues; even so, they can be fatal if they develop near vital organs such as the brain. Malignant tumors are considered cancerous. New cells are constantly produced in the body to replace old ones; sometimes DNA is damaged during this renewal process, and the new cells develop abnormally. These cells continue to multiply rapidly, forming a tumor. Malignant tumors can spread and affect other tissues. Tumors that affect the central nervous system are known as gliomas. Gliomas comprise the following sub-regions:</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p><bold>Edema:</bold> Finger-like projection, an agglomerate of fluid or water. FLAIR and T2-weighted sequences produce the best results.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p><bold>Necrosis:</bold> Collection of dead cells. Best seen in the T1 post-contrast sequence.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p><bold>Enhancing tumor:</bold> Indicates breakdown of the blood&#x2013;brain barrier. Seen in T1c post-contrast sequence.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p><bold>Non-enhancing tumor</bold>: Seen in regions not included in edema, necrosis, or enhancing tumor.</p>
</list-item>
</list>
<p>Brain tumors arise for several reasons. One is the uncontrolled growth of cells in the brain caused by a mutation or defect in a gene; exposure to large amounts of X-rays is an environmental cause that can also lead to the development of brain tumors. A tumor&#x2019;s effects on the body depend on its size, location, and growth rate. General symptoms include changes in the pattern of headaches, nausea or vomiting, vision problems such as blurred vision, tiredness, speech and hearing difficulties, and memory problems. Early diagnosis of a tumor gives the patient the best chance of successful treatment; when care is delayed, the chances of survival drop, the cost of treatment rises, and further complications arise.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p><bold>(A)</bold> Two randomly selected images from the T1wCE modality. <bold>(B)</bold> Intensities of the two images before normalization. <bold>(C)</bold> Intensities of the two images after normalization. It is imperative to note that certain types of tumors are best seen in different modalities, like edema, which is best seen in T2-weighted sequences and FLAIR images. Necrosis is best visible in the T1 post-contrast sequence, and an enhancing tumor is best seen in the T1c post-contrast sequence (<xref ref-type="bibr" rid="B13">13</xref>).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijiam-2022-10-g001.tif"/>
</fig>
<p>Early diagnosis improves treatment outcomes. Diagnosing a brain tumor generally begins with a magnetic resonance imaging (MRI) scan. MRI is an imaging technique that uses magnetic fields and radio waves to create detailed images of organs and tissues, and it can be used to measure a tumor&#x2019;s size. For diagnosing a tumor, the accuracy of conventional MRI is generally satisfactory, but it should not be relied on exclusively. One important and efficient technique for diagnosing tumors is brain tumor segmentation: the process of separating tumor tissue from the other parts of the brain in an MRI scan. It separates tumors from normal brain tissues and helps identify the spatial location of a tumor correctly, which makes it useful for diagnosis and treatment planning. However, the irregular boundaries of tumors in MRI scans can make segmentation difficult. When a tumor is detected in time through segmentation, clinicians can begin treatment planning as early as possible.</p>
<p>Our dataset uses the Neuroimaging Informatics Technology Initiative (NIFTI) file format. About 10 years ago, the NIFTI format was conceived as a replacement for the ANALYZE 7.5 file format. NIFTI files are frequently employed in image informatics for neuroscience and neuroradiology research. For our project, we used the brain tumor segmentation (BraTS) dataset. The Radiological Society of North America (RSNA), the American Society of Neuroradiology (ASNR), and the Medical Image Computing and Computer Assisted Intervention (MICCAI) society work together to organize the BraTS challenge. The model we used for segmentation is &#x201C;SegResNet,&#x201D; which we trained on the BraTS 2021 (<xref ref-type="bibr" rid="B1">1</xref>&#x2013;<xref ref-type="bibr" rid="B5">5</xref>) (Task 1) dataset. The following sections contain a detailed description of the dataset, the proposed methodology, a comparative analysis, and results.</p>
</sec>
<sec id="S2">
<title>2. Theoretical background</title>
<sec id="S2.SS1">
<title>2.1. Technology stack</title>
<p>We used the Kaggle notebooks for training, validation, and testing our model. By default, Kaggle provides P100 GPUs for all notebooks with 16 GB of RAM and 16 GB of storage space. We used PyTorch version 1.10.0 and MONAI 0.7.0 for coding. We also used intensity normalization (<xref ref-type="bibr" rid="B6">6</xref>) version 2.1.1 for normalizing intensities.</p>
</sec>
<sec id="S2.SS2">
<title>2.2. Dataset</title>
<p>We used the BraTS 2021 Task 1 dataset. The collection includes segmentation masks and native T1-weighted (T1), post-contrast T1-weighted (T1Gd), T2-weighted (T2), and T2 Fluid-Attenuated Inversion Recovery (FLAIR) NIFTI volumes for 1,251 individuals from various sources, in axial, sagittal, and coronal orientations. All of the files were NIFTI volumes of 240 &#x00D7; 240 &#x00D7; 155 voxels, i.e., 155 axial slices of 240 &#x00D7; 240 each.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Augmented samples of different modalities. Each &#x201C;image channel&#x201D; corresponds to a different modality.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijiam-2022-10-g002.tif"/>
</fig>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>Brain tumor segmentation masks.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijiam-2022-10-g003.tif"/>
</fig>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p><bold>(A)</bold> Average loss (left) and average dice score (right) per epoch. <bold>(B)</bold> Dice score for tumor core (left), whole tumor (center), and enhanced tumor (right).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijiam-2022-10-g004.tif"/>
</fig>
</sec>
<sec id="S2.SS3">
<title>2.3. Literature survey</title>
<p>For the purpose of segmenting brain tumors, Havaei et al. proposed a CNN architecture that utilizes local and global features concurrently. They obtained dice scores of 0.85 for segmenting the whole tumor, 0.78 for the tumor core, and 0.73 for the enhancing tumor (<xref ref-type="bibr" rid="B7">7</xref>).</p>
<p>Pereira et al. (<xref ref-type="bibr" rid="B8">8</xref>) explored a way to counter large spatial and structural variability by incorporating small 3 &#x00D7; 3 kernels in their proposed architecture. They attained dice scores of 0.88 on whole tumor segmentation, 0.83 on tumor core segmentation, and 0.77 on enhancing tumor segmentation. Myronenko (<xref ref-type="bibr" rid="B9">9</xref>) described an encoder&#x2013;decoder-like architecture with a variational autoencoder branch. The model yielded dice scores of 0.81, 0.90, and 0.86 on enhancing tumor, whole tumor, and tumor core segmentation, respectively.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>The model&#x2019;s output <bold>(top)</bold> is compared to the actual tumor <bold>(bottom)</bold>. The yellow region is the enhanced tumor, the yellow and green regions constitute the tumor core, and the yellow, red, and green regions constitute the whole tumor.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijiam-2022-10-g005.tif"/>
</fig>
<p>A cascade of fully convolutional neural networks based on anisotropic and dilated convolution filters was proposed by Wang et al. (<xref ref-type="bibr" rid="B10">10</xref>). They broke the problem down into a series of binary classification problems. Their model received dice scores of 0.78, 0.87, and 0.77 on whole tumor, tumor core, and enhancing tumor segmentation, respectively.</p>
<p>Soltaninejad et al. (<xref ref-type="bibr" rid="B11">11</xref>) extracted texton descriptor features, such as histograms and first-order intensities, which were then fed into a random forest classifier. They scored 0.84 on the whole tumor, 0.82 on the enhancing tumor, and 0.78 on the tumor core segmentation.</p>
<p>Lyu et al. (<xref ref-type="bibr" rid="B12">12</xref>) used a two-stage model; in stage 1, they used an encoder&#x2013;decoder-like architecture with variational autoencoder regularization. In stage 2, the network uses attention gates and is trained on a dataset formed from the stage 1 output. Their dice scores for the whole tumor, tumor core, and enhancing tumor were 0.87, 0.83, and 0.82, respectively.</p>
</sec>
</sec>
<sec id="S3">
<title>3. Methodology</title>
<p><bold>Preprocessing:</bold> The dataset comes from various sources. To correct for the resulting differences in intensities, we applied the intensity normalization technique described by Shinohara et al. (<xref ref-type="bibr" rid="B6">6</xref>) to every image of every modality (FLAIR, T1w, T1wCE, and T2w). A separate normalizer was used for each modality, trained on the images belonging to that modality. Since the images came in different orientations, we reoriented all of them to the RAS orientation.</p>
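<p>As an illustration of the per-modality normalization step, the sketch below applies a simple z-score normalization over foreground (brain) voxels. The full method of Shinohara et al. (<xref ref-type="bibr" rid="B6">6</xref>) fits a statistical model per modality, so this is a simplified stand-in rather than the exact procedure; the intensity-normalization package mentioned earlier provides the real implementation.</p>

```python
import numpy as np

def normalize_modality(volume):
    """Z-score normalize one MRI volume over its foreground (brain) voxels.

    Simplified stand-in for the statistical normalization of Shinohara
    et al.; background voxels (intensity 0) are left untouched.
    """
    out = volume.astype(np.float32).copy()
    mask = out > 0                      # crude brain mask: nonzero voxels
    vals = out[mask]
    out[mask] = (vals - vals.mean()) / (vals.std() + 1e-8)
    return out

# One normalizer is fitted and applied per modality, e.g. (illustrative names):
# normalized = {m: normalize_modality(v) for m, v in volumes.items()}
```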
<p><bold>Image Augmentations:</bold> We cropped each volume to a region of interest of 224 &#x00D7; 224 &#x00D7; 144. We then randomly flipped the images across all three axes and randomly scaled and shifted the intensities. No augmentations were used for validation or testing.</p>
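<p>The augmentation pipeline above can be sketched in plain NumPy as follows. The crop is implemented here as a center crop, and the scale/shift ranges are illustrative assumptions, not the authors' exact settings:</p>

```python
import numpy as np

def augment(img, roi=(224, 224, 144), rng=None):
    """Center-crop to the region of interest, then apply random flips and a
    random intensity scale/shift. `img` has shape (channels, H, W, D)."""
    if rng is None:
        rng = np.random.default_rng()
    # crop the last three (spatial) dimensions to the ROI
    starts = [(s - r) // 2 for s, r in zip(img.shape[-3:], roi)]
    img = img[..., starts[0]:starts[0] + roi[0],
              starts[1]:starts[1] + roi[1],
              starts[2]:starts[2] + roi[2]]
    # random flips along each spatial axis with probability 0.5
    for axis in (-3, -2, -1):
        if rng.random() > 0.5:
            img = np.flip(img, axis=axis)
    # random intensity scale and shift (ranges are illustrative)
    scale = 1.0 + rng.uniform(-0.1, 0.1)
    shift = rng.uniform(-0.1, 0.1)
    return img * scale + shift
```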
<p><bold>Loss Functions and Metric:</bold> We used the dice loss function (<xref ref-type="bibr" rid="B14">14</xref>), a region-based loss, together with the dice metric to measure the similarity between predicted and ground-truth segmentations. Mathematically, the dice coefficient can be expressed as follows:</p>
<disp-formula id="S3.Ex1">
<mml:math id="M1">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>D</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mtext>i</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mtext>i</mml:mtext>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mtext>i</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:msubsup>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mtext>i</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>D</italic> is the dice coefficient and <italic>p</italic><sub>i</sub> and <italic>g</italic><sub>i</sub> are corresponding predicted and ground-truth pixel values (<xref ref-type="bibr" rid="B15">15</xref>). For training, we added a small constant to the denominator to handle the case in which the denominator becomes zero.</p>
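<p>The coefficient above, with the smoothing constant added during training, translates directly into code. A minimal NumPy sketch (the authors worked in PyTorch/MONAI, so this is an illustration of the formula rather than their implementation) might look like:</p>

```python
import numpy as np

def dice_coefficient(pred, target, smooth=1e-5):
    """Dice coefficient D = 2*sum(p_i*g_i) / (sum(p_i^2) + sum(g_i^2)).

    `smooth` is the small constant added to the denominator so that D is
    defined even when both prediction and ground truth are empty.
    """
    p, g = pred.ravel(), target.ravel()
    return 2.0 * (p * g).sum() / ((p * p).sum() + (g * g).sum() + smooth)

def dice_loss(pred, target, smooth=1e-5):
    # The quantity minimized in training is one minus the coefficient.
    return 1.0 - dice_coefficient(pred, target, smooth)
```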
<p><bold>Hyperparameters:</bold> We initialized the model with 16 filters, keeping the number of input channels equal to 4 and yielding an output of 3 channels, corresponding to the three classes discussed earlier. We also applied a dropout with a dropout rate of 0.2. Due to memory constraints, we kept the batch size to 1. We used the Adam optimizer for training with a learning rate of 0.0001. We applied L2 regularization with a regularization coefficient set to 0.00001 and trained the model for 10 epochs.</p>
<p><bold>Model Architecture:</bold> We used the SegResNet architecture with the number of downsampling blocks in each layer being 1, 2, 2, and 4, respectively. The number of upsampling blocks in each layer is 1, 1, and 1, respectively.</p>
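<p>With the MONAI version listed in the technology stack, the architecture and hyperparameters described in the two paragraphs above could be configured roughly as below. The keyword names follow MONAI's SegResNet API as we understand it and should be checked against its documentation; this is a configuration sketch, not the authors' exact code.</p>

```python
import torch
from monai.networks.nets import SegResNet

# 4 input modalities, 3 output classes (TC, WT, ET), 16 initial filters,
# dropout 0.2; down/up blocks per layer as described in the text.
model = SegResNet(
    init_filters=16,
    in_channels=4,
    out_channels=3,
    dropout_prob=0.2,
    blocks_down=(1, 2, 2, 4),
    blocks_up=(1, 1, 1),
)

# Adam with learning rate 1e-4 and L2 regularization (weight decay) 1e-5;
# the batch size of 1 is set on the DataLoader, not here.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
```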
<p><bold>Training:</bold> We trained our model for 10 epochs and saved the best-performing model. We modified the traditional training technique and used the mixed-precision (<xref ref-type="bibr" rid="B16">16</xref>) training method, which enhances performance and efficiency and reduces memory requirements. We used automatic mixed precision for both the training and validation tasks.</p>
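<p>A single training step under automatic mixed precision (<xref ref-type="bibr" rid="B16">16</xref>) can be sketched in PyTorch as follows, assuming generic <monospace>model</monospace>, <monospace>loss_fn</monospace>, and <monospace>optimizer</monospace> objects; AMP is enabled only when a GPU is available, so the same code degrades gracefully to full precision on CPU.</p>

```python
import torch

device_type = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device_type == "cuda"   # AMP needs a GPU; otherwise run in fp32
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

def train_step(model, inputs, target, optimizer, loss_fn):
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type=device_type, enabled=use_amp):
        # forward pass runs in float16 where safe, float32 elsewhere
        loss = loss_fn(model(inputs), target)
    scaler.scale(loss).backward()   # scale loss to avoid fp16 underflow
    scaler.step(optimizer)          # unscales grads; skips step on inf/nan
    scaler.update()
    return loss.item()
```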
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Comparison of dice scores obtained by different methods.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Study</td>
<td valign="top" align="left">Dataset</td>
<td valign="top" align="left">Dice scores</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Havaei et al. (<xref ref-type="bibr" rid="B7">7</xref>)</td>
<td valign="top" align="left">200 2D slices and approximately 6,000 2D images</td>
<td valign="top" align="left">WT = 0.85, TC = 0.78, ET = 0.73</td>
</tr>
<tr>
<td valign="top" align="left">Pereira et al. (<xref ref-type="bibr" rid="B8">8</xref>)</td>
<td valign="top" align="left">The training set contains 20 HGG and 10 LGG.</td>
<td valign="top" align="left">WT = 0.88, TC = 0.83, ET = 0.77</td>
</tr>
<tr>
<td valign="top" align="left">Myronenko et al. (<xref ref-type="bibr" rid="B9">9</xref>)</td>
<td valign="top" align="left">Training dataset included 285 cases (210 HGG and 75 LGG).</td>
<td valign="top" align="left">ET = 0.81, WT = 0.90, TC = 0.86</td>
</tr>
<tr>
<td valign="top" align="left">Wang et al. (<xref ref-type="bibr" rid="B10">10</xref>)</td>
<td valign="top" align="left">The training set contains images from 285 patients (210 HGG and 75 LGG).</td>
<td valign="top" align="left">WT = 0.7831, TC = 0.8739, ET = 0.7748</td>
</tr>
<tr>
<td valign="top" align="left">Soltaninejad et al. (<xref ref-type="bibr" rid="B11">11</xref>)</td>
<td valign="top" align="left">The dataset was tested on 11 multimodal images and the BRATS 2013 clinical dataset using 30 multimodal images.</td>
<td valign="top" align="left">WT = 0.80, TC = 0.89</td>
</tr>
<tr>
<td valign="top" align="left">Lyu et al. (<xref ref-type="bibr" rid="B12">12</xref>)</td>
<td valign="top" align="left">The BraTS 2020 dataset containing 259 HGG and 110 LGG cases</td>
<td valign="top" align="left">ET = 0.79, WT = 0.90, TC = 0.83</td>
</tr>
<tr>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="left">1,251 NIFTI volumes of 155 slices (240 &#x00D7; 240) each</td>
<td valign="top" align="left">ET = 0.71, WT = 0.90, TC = 0.84</td>
</tr>
</tbody>
</table></table-wrap>
</sec>
<sec id="S4" sec-type="results">
<title>4. Result analysis</title>
<p>Our loss converged to 0.11, and we obtained an average dice score of 0.84 on the validation set. The per-class dice scores on the validation set were as follows:</p>
<p>TC = 0.84</p>
<p>WT = 0.90</p>
<p>ET = 0.79</p>
<p>On the test set, we got a mean dice score of 0.86; in the TC class, the dice score was 0.86; in the WT class, it was 0.92; and finally, on ET, it was 0.81.</p>
</sec>
<sec id="S5" sec-type="discussion">
<title>5. Discussion</title>
<p>Brain tumor segmentation proves to be an effective tool for accurately diagnosing a tumor and its constituents. Our model can segment a tumor from an MRI image efficiently. The model was trained on 750 NIFTI volumes and validated and tested on 250 NIFTI volumes. We brought our loss down to 0.11 and obtained a mean dice score of 0.84. We believe that training on more data could significantly improve the model&#x2019;s performance in future research.</p>
</sec>
<sec id="S6" sec-type="conclusion">
<title>6. Conclusion</title>
<p>In this study, we used the SegResNet architecture to segment brain tumors. Our model produced strong results on 200 test cases, with best dice scores of 0.86 on TC, 0.92 on WT, and 0.81 on ET; the mean dice score on the validation set was 0.84.</p>
</sec>
<sec id="S7">
<title>Disclosure</title>
<p>The authors declare that they have no competing interests related to the publication of this manuscript.</p>
</sec>
<sec id="S8" sec-type="author-contributions">
<title>Author contributions</title>
<p>All authors made substantial contributions to drafting the manuscript and revising it critically for important intellectual content, agreed to submit it to the current journal, and gave final approval of the version to be published.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1"><label>1.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baid</surname> <given-names>U</given-names></name> <name><surname>Ghodasara</surname> <given-names>S</given-names></name> <name><surname>Mohan</surname> <given-names>S</given-names></name> <name><surname>Bilello</surname> <given-names>M</given-names></name> <name><surname>Calabrese</surname> <given-names>E</given-names></name> <name><surname>Colak</surname> <given-names>E</given-names></name><etal/></person-group> <article-title>The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification.</article-title> <source><italic>arXiv</italic></source> [<comment>Preprint</comment>]. (<year>2021</year>).</citation></ref>
<ref id="B2"><label>2.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Menze</surname> <given-names>BH</given-names></name> <name><surname>Jakab</surname> <given-names>A</given-names></name> <name><surname>Bauer</surname> <given-names>S</given-names></name> <name><surname>Kalpathy-Cramer</surname> <given-names>J</given-names></name> <name><surname>Farahani</surname> <given-names>K</given-names></name> <name><surname>Kirby</surname> <given-names>J</given-names></name><etal/></person-group> <article-title>The multimodal brain tumor image segmentation benchmark (BRATS).</article-title> <source><italic>IEEE Trans Med Imaging.</italic></source> (<year>2015</year>) <volume>34</volume>:<fpage>1993</fpage>&#x2013;<lpage>2024</lpage>. <pub-id pub-id-type="doi">10.1109/TMI.2014.2377694</pub-id></citation></ref>
<ref id="B3"><label>3.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bakas</surname> <given-names>S</given-names></name> <name><surname>Akbari</surname> <given-names>H</given-names></name> <name><surname>Sotiras</surname> <given-names>A</given-names></name> <name><surname>Bilello</surname> <given-names>M</given-names></name> <name><surname>Rozycki</surname> <given-names>M</given-names></name> <name><surname>Kirby</surname> <given-names>JS</given-names></name><etal/></person-group> <article-title>Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features.</article-title> <source><italic>Nat Sci Data.</italic></source> (<year>2017</year>) <volume>4</volume>:<issue>170117</issue>. <pub-id pub-id-type="doi">10.1038/sdata.2017.117</pub-id></citation></ref>
<ref id="B4"><label>4.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bakas</surname> <given-names>S</given-names></name> <name><surname>Akbari</surname> <given-names>H</given-names></name> <name><surname>Sotiras</surname> <given-names>A</given-names></name> <name><surname>Bilello</surname> <given-names>M</given-names></name> <name><surname>Rozycki</surname> <given-names>M</given-names></name> <name><surname>Kirby</surname> <given-names>J</given-names></name><etal/></person-group> <article-title>Segmentation labels and radiomic features for the pre-operative scans of the TCGA-GBM collection.</article-title> <source><italic>Cancer Imaging Arch.</italic></source> (<year>2017</year>) <volume>286</volume>. <pub-id pub-id-type="doi">10.7937/K9/TCIA.2017.KLXWJJ1Q</pub-id></citation></ref>
<ref id="B5"><label>5.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bakas</surname> <given-names>S</given-names></name> <name><surname>Akbari</surname> <given-names>H</given-names></name> <name><surname>Sotiras</surname> <given-names>A</given-names></name> <name><surname>Bilello</surname> <given-names>M</given-names></name> <name><surname>Rozycki</surname> <given-names>M</given-names></name> <name><surname>Kirby</surname> <given-names>J</given-names></name><etal/></person-group> <article-title>Segmentation labels and radiomic features for the pre-operative scans of the TCGA-LGG collection.</article-title> <source><italic>Cancer Imaging Arch.</italic></source> (<year>2017</year>). <pub-id pub-id-type="doi">10.7937/K9/TCIA.2017.KLXWJJ1Q</pub-id></citation></ref>
<ref id="B6"><label>6.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shinohara</surname> <given-names>RT</given-names></name> <name><surname>Sweeney</surname> <given-names>EM</given-names></name> <name><surname>Goldsmith</surname> <given-names>J</given-names></name> <name><surname>Shiee</surname> <given-names>N</given-names></name> <name><surname>Mateen</surname> <given-names>FJ</given-names></name> <name><surname>Calabresi</surname> <given-names>PA</given-names></name><etal/></person-group> <article-title>Statistical normalization techniques for magnetic resonance imaging.</article-title> <source><italic>Neuroimage Clin.</italic></source> (<year>2014</year>) <volume>6</volume>:<fpage>9</fpage>&#x2013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1016/j.nicl.2014.08.008</pub-id></citation></ref>
<ref id="B7"><label>7.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Havaei</surname> <given-names>M</given-names></name> <name><surname>Davy</surname> <given-names>A</given-names></name> <name><surname>Warde-Farley</surname> <given-names>D</given-names></name> <name><surname>Biard</surname> <given-names>A</given-names></name> <name><surname>Courville</surname> <given-names>A</given-names></name> <name><surname>Bengio</surname> <given-names>Y</given-names></name><etal/></person-group> <article-title>Brain tumor segmentation with deep neural networks.</article-title> <source><italic>arXiv</italic></source> [<comment>Preprint</comment>]. (<year>2015</year>).</citation></ref>
<ref id="B8"><label>8.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pereira</surname> <given-names>S</given-names></name> <name><surname>Pinto</surname> <given-names>A</given-names></name> <name><surname>Alves</surname> <given-names>V</given-names></name> <name><surname>Silva</surname> <given-names>C</given-names></name></person-group>. <article-title>Brain tumor segmentation using convolutional neural networks in MRI images.</article-title> <source><italic>IEEE Trans Med Imaging.</italic></source> (<year>2016</year>) <volume>35</volume>:<fpage>1240</fpage>&#x2013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1109/TMI.2016.2538465</pub-id></citation></ref>
<ref id="B9"><label>9.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Myronenko</surname> <given-names>A</given-names></name></person-group>. <article-title>3D MRI brain tumor segmentation using autoencoder regularization.</article-title> <source><italic>arXiv</italic></source> [<comment>Preprint</comment>]. (<year>2018</year>).</citation></ref>
<ref id="B10"><label>10.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>G</given-names></name> <name><surname>Li</surname> <given-names>W</given-names></name> <name><surname>Ourselin</surname> <given-names>S</given-names></name> <name><surname>Vercauteren</surname> <given-names>T</given-names></name></person-group>. <article-title>Automatic brain tumor segmentation based on cascaded convolutional neural networks with uncertainty estimation.</article-title> <source><italic>Front Comput Neurosci.</italic></source> (<year>2019</year>) <volume>13</volume>:<issue>56</issue>. <pub-id pub-id-type="doi">10.3389/fncom.2019.00056</pub-id></citation></ref>
<ref id="B11"><label>11.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soltaninejad</surname> <given-names>M</given-names></name> <name><surname>Yang</surname> <given-names>G</given-names></name> <name><surname>Lambrou</surname> <given-names>T</given-names></name> <name><surname>Allinson</surname> <given-names>N</given-names></name> <name><surname>Jones</surname> <given-names>TL</given-names></name> <name><surname>Barrick</surname> <given-names>TR</given-names></name><etal/></person-group> <article-title>Supervised learning-based multimodal MRI brain tumor segmentation using texture features from supervoxels.</article-title> <source><italic>Comput. Methods Prog. Biomed.</italic></source> (<year>2018</year>) <volume>157</volume>:<fpage>69</fpage>&#x2013;<lpage>84</lpage>. <pub-id pub-id-type="doi">10.1016/j.cmpb.2018.01.003</pub-id></citation></ref>
<ref id="B12"><label>12.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lyu</surname> <given-names>C</given-names></name> <name><surname>Shu</surname> <given-names>H</given-names></name></person-group>. <article-title>A two-stage cascade model with variational autoencoders and attention gates for MRI brain tumor segmentation.</article-title> <source><italic>Brainlesion.</italic></source> (<year>2021</year>):<fpage>435</fpage>&#x2013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-72084-1_39</pub-id></citation></ref>
<ref id="B13"><label>13.</label><citation citation-type="web"><person-group person-group-type="author"><name><surname>Iitm</surname> <given-names>N.</given-names></name></person-group> <source><italic>Segmentation of brain tumors from MRI using deep learning.</italic></source> (<year>2019</year>). Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=PcNqAVNCZrE&#x0026;t=331s">https://www.youtube.com/watch?v=PcNqAVNCZrE&#x0026;t=331s</ext-link></citation></ref>
<ref id="B14"><label>14.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Jadon</surname> <given-names>S</given-names></name></person-group>. <article-title>A survey of loss functions for semantic segmentation.</article-title> <source><italic>Proceedings of the 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB).</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2020</year>). <pub-id pub-id-type="doi">10.1109/cibcb48159.2020.9277638</pub-id></citation></ref>
<ref id="B15"><label>15.</label><citation citation-type="web"><person-group person-group-type="author"><name><surname>Tiu</surname> <given-names>E.</given-names></name></person-group> <source><italic>Metrics to evaluate your semantic segmentation model. [online] Medium.</italic></source> (<year>2021</year>). Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=PcNqAVNCZrE&#x0026;t=331s">https://www.youtube.com/watch?v=PcNqAVNCZrE&#x0026;t=331s</ext-link> (<comment>accessed November 13, 2021</comment>).</citation></ref>
<ref id="B16"><label>16.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Micikevicius</surname> <given-names>P</given-names></name> <name><surname>Narang</surname> <given-names>S</given-names></name> <name><surname>Alben</surname> <given-names>J</given-names></name> <name><surname>Gregory</surname> <given-names>FD</given-names></name> <name><surname>Elsen</surname> <given-names>E</given-names></name> <name><surname>Garcia</surname> <given-names>D</given-names></name><etal/></person-group> <article-title>Mixed precision training.</article-title> <source><italic>arXiv</italic></source> [<comment>Preprint</comment>]. (<year>2017</year>).</citation></ref>
<ref id="B17"><label>17.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bakas</surname> <given-names>S</given-names></name> <name><surname>Reyes</surname> <given-names>M</given-names></name> <name><surname>Jakab</surname> <given-names>A</given-names></name> <name><surname>Bauer</surname> <given-names>S</given-names></name> <name><surname>Rempfler</surname> <given-names>M</given-names></name> <name><surname>Crimi</surname> <given-names>A</given-names></name><etal/></person-group> <article-title>Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge.</article-title> <source><italic>arXiv</italic></source> [<comment>Preprint</comment>]. (<year>2019</year>).</citation></ref>
<ref id="B18"><label>18.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Badrinarayanan</surname> <given-names>V</given-names></name> <name><surname>Kendall</surname> <given-names>A</given-names></name> <name><surname>Cipolla</surname> <given-names>R</given-names></name></person-group>. <article-title>SegNet: A deep convolutional encoder-decoder architecture for image segmentation.</article-title> <source><italic>IEEE Trans Pattern Anal Mach Intell.</italic></source> (<year>2017</year>) <volume>39</volume>:<fpage>2481</fpage>&#x2013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1109/tpami.2016.2644615</pub-id></citation></ref>
</ref-list>
</back>
</article>
