<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Bohr. Scit.</journal-id>
<journal-title>BOHR International Journal of Smart Computing and Information Technology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Bohr. Scit.</abbrev-journal-title>
<issn pub-type="epub">2583-2026</issn>
<publisher>
<publisher-name>BOHR</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.54646/bijscit.2022.26</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>CNN-based plastic waste detection system</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Aridj</surname> <given-names>Grourou Aya</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Saher</surname> <given-names>Louchen</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Larbi</surname> <given-names>Guezouli</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>University of Batna 2</institution>, <addr-line>Batna</addr-line>, <country>Algeria</country></aff>
<aff id="aff2"><sup>2</sup><institution>LEREESI, Higher National School of Renewable Energies, Environment and Sustainable Development</institution>, <addr-line>Batna</addr-line>, <country>Algeria</country></aff>
<author-notes>
<corresp id="c001">&#x002A;Correspondence: Guezouli Larbi, <email>larbi.guezouli@hns-re2sd.dz</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>30</day>
<month>07</month>
<year>2022</year>
</pub-date>
<volume>3</volume>
<issue>1</issue>
<fpage>43</fpage>
<lpage>49</lpage>
<history>
<date date-type="received">
<day>18</day>
<month>06</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>08</day>
<month>07</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Aya Aridj, Saher and Larbi.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Aya Aridj, Saher and Larbi</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Plastic waste has become a pressing global concern in recent decades: because it is non-biodegradable, it accumulates and causes lasting pollution and damage to our planet. Recycling plastic waste is one of the most effective responses to this dilemma, which is why the aim of our project was to create a system that detects plastic waste using a large labeled dataset and one of the best-known deep learning architectures, &#x201C;Convolutional Neural Networks,&#x201D; to classify waste, speed up the collection process, and make recycling easier. With this approach, we achieved 97% accuracy.</p>
</abstract>
<kwd-group>
<kwd>plastic waste detection</kwd>
<kwd>Convolutional Neural Networks (CNN)</kwd>
<kwd>deep learning (DL)</kwd>
<kwd>real-time</kwd>
<kwd>waste management</kwd>
</kwd-group>
<counts>
<fig-count count="10"/>
<table-count count="1"/>
<equation-count count="7"/>
<ref-count count="9"/>
<page-count count="7"/>
<word-count count="3162"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>Introduction</title>
<p>Plastic waste is a major environmental problem. According to the latest statistics, the annual global production of plastic waste is over 300 million tons (<xref ref-type="bibr" rid="B1">1</xref>), and due to its non-biodegradable nature, plastic waste accumulates in landfills for hundreds of years, causing pollution, water contamination, and life-threatening problems for all creatures.</p>
<p>The plastic recycling process can be a big help in reducing the harm caused by plastic waste. Several new technologies are being developed as well, such as pyrolysis, which can be used to convert plastic waste into oil and gas, and depolymerization, which can be used to break down plastic into its original components.</p>
<p>To facilitate the implementation and realization of these technologies, we propose a system based on artificial intelligence that detects plastic waste using deep learning&#x2019;s most widely used model, the Convolutional Neural Network (CNN).</p>
<p>Waste detection requires precision across many factors to obtain the best results.</p>
<p>A comparison study was performed by Yang and Thung (<xref ref-type="bibr" rid="B2">2</xref>) to help with the recycling process by predicting waste type from vision alone. The reliance on hand-collected data in their project, entitled &#x201C;Classification of Trash for Recyclability Status,&#x201D; raises concerns about the dataset&#x2019;s representativeness and potential biases. It is crucial that the dataset encompass a diverse range of waste materials, since waste items can vary significantly in appearance, shape, and condition. Exploring alternative feature extraction techniques and optimizing the CNN architecture could also enhance the model&#x2019;s predictive capabilities.</p>
<p>As seen in the Intelligent Waste Separator project (<xref ref-type="bibr" rid="B3">3</xref>), when K-Nearest Neighbor (KNN) and Health information management (HIM) methods were implemented, they relied on the points surrounding the shape of the object, which prevented the system from detecting deformed waste. This is a major deficiency, since waste generally comes in multiple shapes and conditions.</p>
<p>Arghadeep Mitra (<xref ref-type="bibr" rid="B4">4</xref>) stated that &#x201C;The main issue of this project was the dataset which includes images that are slightly different from local waste materials,&#x201D; which caused the model to predict some of the images wrongly. Training on datasets collected from real, dirty locations, with images containing multiple objects, might therefore have improved his model&#x2019;s predictions.</p>
<p>The detection of waste demands attention to detail and precision across various factors. The limitations faced in the works discussed above underscore the importance of employing advanced techniques and comprehensive datasets to achieve better prediction results.</p>
<p>The manuscript is organized as follows:</p>
<p>After a general introduction, subsequent sections discuss the plastic waste sorting systems and validation of the proposed work. Finally, a general conclusion is given.</p>
</sec>
<sec id="S2">
<title>Plastic waste sorting system</title>
<p>The creation of our system involved several distinct steps, as illustrated in the schema in <xref ref-type="fig" rid="F1">Figure 1</xref>. The process begins by feeding the model the pre-processed dataset. The model is then trained on this dataset, during which the parameters of the CNN layers are updated. After the training phase, the model moves on to the testing phase, where it predicts whether the objects captured by the camera are plastic or other types of waste.</p>
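The flow in Figure 1 can be sketched as a small driver loop. This is purely illustrative: the function names, the dummy model, and the 0.5 decision threshold are our assumptions for the sketch, not details of the actual implementation.

```python
import numpy as np

def preprocess(image):
    """Illustrative pre-processing: scale 8-bit pixel values to [0, 1]."""
    return image.astype(float) / 255.0

def sort_waste(images, model, threshold=0.5):
    """Run each captured frame through the trained model and label it."""
    labels = []
    for image in images:
        score = model(preprocess(image))  # sigmoid-style score in [0, 1]
        labels.append("plastic" if score >= threshold else "other")
    return labels

# Dummy stand-in for the trained CNN: scores bright frames higher.
dummy_model = lambda x: x.mean()

frames = [np.full((4, 4), 255), np.zeros((4, 4))]
print(sort_waste(frames, dummy_model))  # ['plastic', 'other']
```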
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Plastic waste sorting process.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g001.tif"/>
</fig>
<sec id="S2.SS1">
<title>Dataset pre-processing</title>
<p>In general, an image must first go through a pre-processing stage where it is cleaned. Pre-processing consists of operations applied to the image to facilitate its use in more complex operations that rely on precise image characteristics.</p>
<p>Our dataset has passed through some pre-processing steps to achieve the best training results, as shown in <xref ref-type="fig" rid="F2">Figure 2</xref>.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Image pre-processing.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g002.tif"/>
</fig>
<sec id="S2.SS1.SSS1">
<title>Segmentation</title>
<p>Image segmentation is the process of dividing a digital image into subgroups of pixels. This can reduce the complexity of the image, making it easier to analyze.</p>
<p>Segmentation algorithms are used to group pixels in an image. These groups are labeled to identify the objects that make them up (<xref ref-type="bibr" rid="B5">5</xref>).</p>
</sec>
<sec id="S2.SS1.SSS2">
<title>Gray-scale</title>
<p>Also known as a black-and-white or monochrome image, a gray-scale image consists solely of shades of gray, ranging from pure white to pure black.</p>
<p>Unlike color images, gray-scale images do not contain any color information and only represent the brightness or luminance values of the pixels, as shown in <xref ref-type="fig" rid="F3">Figure 3</xref>. This offers a simplified and focused representation of visual data, allowing for specific analyses and ensuring accessibility in various contexts (<xref ref-type="bibr" rid="B6">6</xref>).</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>A color photo converted to gray-scale image.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g003.tif"/>
</fig>
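Gray-scale conversion is typically a weighted sum of the color channels. A minimal numpy sketch, using the common BT.601 luma weights (an assumption on our part; the exact weights used in our pipeline may differ):

```python
import numpy as np

def to_grayscale(rgb):
    """Weighted sum of R, G, B channels (BT.601 luma weights)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # shape (H, W, 3) -> (H, W)

# A 1x2 image: a pure white pixel and a pure red pixel.
img = np.array([[[255, 255, 255], [255, 0, 0]]], dtype=float)
gray = to_grayscale(img)
print(gray)  # white maps to ~255, red to ~76
```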
</sec>
<sec id="S2.SS1.SSS3">
<title>Image thresholding</title>
<p>This is a technique used in image processing to convert a gray-scale or color image into a binary image, as shown in <xref ref-type="fig" rid="F4">Figure 4</xref>.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>A gray-scale image after thresholding.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g004.tif"/>
</fig>
<p>The goal of thresholding is to separate objects or regions of interest from the background based on their pixel intensities to aid in image processing (<xref ref-type="bibr" rid="B7">7</xref>).</p>
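A minimal sketch of global thresholding; the cutoff value 128 here is an arbitrary illustration, not the threshold used in our pipeline:

```python
import numpy as np

def threshold(gray, t=128):
    """Map a gray-scale image to a binary image: 255 above t, 0 otherwise."""
    return np.where(gray > t, 255, 0).astype(np.uint8)

gray = np.array([[10, 200], [130, 90]])
print(threshold(gray))  # [[  0 255]
                        #  [255   0]]
```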
</sec>
<sec id="S2.SS1.SSS4">
<title>Contour detection</title>
<p>First, using a simple script, we extracted the contours in each image of the dataset with the Canny edge detection algorithm. Edge detection is a computational technique that uses mathematical methods to identify locations in an image where pixel intensity changes noticeably.</p>
<p>Canny refers to a well-known multi-step algorithm that detects the edges of any input image, developed by John F. Canny in 1986 (<xref ref-type="bibr" rid="B8">8</xref>).</p>
<p>The Canny edge detection algorithm is composed of the following steps:</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>Noise reduction: Apply a Gaussian blur to the image to reduce noise and smooth out irregularities.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Gradient calculation: Calculate the gradient magnitude and orientation for each pixel in the blurred image.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Non-maximum suppression: Suppress non-maximum gradient values by thinning the edges to a single pixel thickness.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Thresholding: Apply thresholding to classify the remaining pixels as either strong edges, weak edges, or non-edges based on gradient magnitudes.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Edge tracking by hysteresis: Connect weak edges to strong edges, thereby forming complete edge contours. Weak edges that are not connected to any strong edges are disregarded.</p>
</list-item>
</list>
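The first steps above can be sketched in numpy. This is a simplified illustration (Gaussian blur, Sobel gradients, double thresholding) that omits non-maximum suppression and hysteresis, which a full library routine such as OpenCV's Canny implements:

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 2-D convolution with zero padding (for illustration only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def edge_candidates(gray, low=50.0, high=150.0):
    # Step 1: noise reduction with a 3x3 Gaussian kernel.
    gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    blurred = convolve2d(gray.astype(float), gauss)
    # Step 2: gradient magnitude from Sobel operators.
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve2d(blurred, sx)
    gy = convolve2d(blurred, sx.T)
    mag = np.hypot(gx, gy)
    # Step 4: double thresholding into strong / weak edge pixels.
    strong = mag >= high
    weak = (mag >= low) & ~strong
    return strong, weak

# A vertical step edge yields strong edge pixels along the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 255
strong, weak = edge_candidates(img)
print(strong.any())  # True
```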
<p>Following the application of Canny&#x2019;s filter, we transformed the detected contour points into a tabular representation stored as a CSV (Comma-Separated Values) file. Another simple script then converted the CSV file into binary images representing only the shapes present in each image. As a result, our dataset assumed the structure illustrated in the schema above.</p>
<p>This representation can facilitate feature extraction, aid in model interpretation, reduce dimensionality, and enhance the training process for our model, as shown in <xref ref-type="fig" rid="F5">Figure 5</xref>.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>Canny edge detection.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g005.tif"/>
</fig>
</sec>
</sec>
<sec id="S2.SS2">
<title>Plastic waste detection network (PWDN)</title>
<p>The Plastic Waste Detection Network (PWDN) is a CNN. This is the core of our proposal.</p>
<p>We employed a sequential CNN architecture, as presented in <xref ref-type="fig" rid="F6">Figure 6</xref>, where we used three convolutional blocks and, at the end, two dense layers.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p>Proposed CNN model (PWDN).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g006.tif"/>
</fig>
<p>The architecture of the proposed model consists of an initial convolution layer with 32 filters, using a stride of 2 and a kernel of size 5 &#x00D7; 5. This layer is followed by two residual bottleneck layers. A flattening layer and a dense layer with a ReLU activation function come next. Finally, another dense layer with a sigmoid activation function is used for classification.</p>
<p>To make the bottleneck robust, we use the ReLU activation function as a nonlinearity with a 3 &#x00D7; 3 kernel, followed by batch normalization during training.</p>
<p><xref ref-type="fig" rid="F6">Figure 6</xref> and <xref ref-type="table" rid="T1">Table 1</xref> describe each layer according to the following characteristics: input size, operator applied, filter size if necessary, stride if necessary, and output size.</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Architecture of the proposed model (PWDN).</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td/>
<td valign="top" align="left">Layer</td>
<td valign="top" align="center">Input</td>
<td valign="top" align="center">Stride</td>
<td valign="top" align="center">Filter</td>
<td valign="top" align="center">Output</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Features extraction layers</td>
<td valign="top" align="left">Conv2D</td>
<td valign="top" align="center">250 &#x00D7; 250 &#x00D7; 3</td>
<td valign="top" align="center">2</td>
<td valign="top" align="center">5 &#x00D7; 5 &#x00D7; 32</td>
<td valign="top" align="center">123 &#x00D7; 123 &#x00D7; 32</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">BatchNormalization (axis = 3)</td>
<td valign="top" align="center">123 &#x00D7; 123 &#x00D7; 32</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">123 &#x00D7; 123 &#x00D7; 32</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">ReLU</td>
<td valign="top" align="center">123 &#x00D7; 123 &#x00D7; 32</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">123 &#x00D7; 123 &#x00D7; 32</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">Conv2D</td>
<td valign="top" align="center">123 &#x00D7; 123 &#x00D7; 32</td>
<td valign="top" align="center">1</td>
<td valign="top" align="center">3 &#x00D7; 3 &#x00D7; 64</td>
<td valign="top" align="center">121 &#x00D7; 121 &#x00D7; 64</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">BatchNormalization (axis = 3)</td>
<td valign="top" align="center">121 &#x00D7; 121 &#x00D7; 64</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">121 &#x00D7; 121 &#x00D7; 64</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">ReLU</td>
<td valign="top" align="center">121 &#x00D7; 121 &#x00D7; 64</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">121 &#x00D7; 121 &#x00D7; 64</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">MaxPooling2D</td>
<td valign="top" align="center">121 &#x00D7; 121 &#x00D7; 64</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">3 &#x00D7; 3</td>
<td valign="top" align="center">40 &#x00D7; 40 &#x00D7; 64</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">Conv2D</td>
<td valign="top" align="center">40 &#x00D7; 40 &#x00D7; 64</td>
<td valign="top" align="center">1</td>
<td valign="top" align="center">3 &#x00D7; 3 &#x00D7; 128</td>
<td valign="top" align="center">38 &#x00D7; 38 &#x00D7; 128</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">BatchNormalization (axis = 3)</td>
<td valign="top" align="center">38 &#x00D7; 38 &#x00D7; 128</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">38 &#x00D7; 38 &#x00D7; 128</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">ReLU</td>
<td valign="top" align="center">38 &#x00D7; 38 &#x00D7; 128</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">38 &#x00D7; 38 &#x00D7; 128</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">MaxPooling2D</td>
<td valign="top" align="center">38 &#x00D7; 38 &#x00D7; 128</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">3 &#x00D7; 3</td>
<td valign="top" align="center">12 &#x00D7; 12 &#x00D7; 128</td>
</tr>
<tr>
<td valign="top" align="left">Classification layers</td>
<td valign="top" align="left">Flatten</td>
<td valign="top" align="center">12 &#x00D7; 12 &#x00D7; 128</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">1 &#x00D7; 1 &#x00D7; 18432</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">Dense</td>
<td valign="top" align="center">1 &#x00D7; 1 &#x00D7; 18432</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">1 &#x00D7; 1 &#x00D7; 128</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">ReLU</td>
<td valign="top" align="center">1 &#x00D7; 1 &#x00D7; 128</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">1 &#x00D7; 1 &#x00D7; 128</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">Dense</td>
<td valign="top" align="center">1 &#x00D7; 1 &#x00D7; 128</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">1 &#x00D7; 1 &#x00D7; 1</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">Sigmoid</td>
<td valign="top" align="center">1 &#x00D7; 1 &#x00D7; 1</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">1 &#x00D7; 1 &#x00D7; 1</td>
</tr>
</tbody>
</table></table-wrap>
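The output sizes in Table 1 follow from the standard shape arithmetic for valid convolutions, floor((input - kernel) / stride) + 1, and for non-overlapping 3 &#x00D7; 3 max-pooling, floor(input / pool) (assuming the pooling stride equals the pool size, the usual default). A quick check:

```python
def conv_out(size, kernel, stride):
    """Output side length of a 'valid' (no padding) convolution."""
    return (size - kernel) // stride + 1

# Conv2D, 5x5 kernel, stride 2: 250 -> 123
assert conv_out(250, 5, 2) == 123
# Conv2D, 3x3 kernel, stride 1: 123 -> 121
assert conv_out(123, 3, 1) == 121
# MaxPooling2D 3x3 (non-overlapping): 121 -> 40
assert 121 // 3 == 40
# Conv2D, 3x3, stride 1: 40 -> 38, then pool: 38 -> 12
assert conv_out(40, 3, 1) == 38 and 38 // 3 == 12
# Flatten: 12 x 12 x 128 = 18432 units feeding the dense layers
assert 12 * 12 * 128 == 18432
print("all shapes in Table 1 check out")
```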
</sec>
<sec id="S2.SS3">
<title>Plastic detection</title>
<p>The data used for testing follows the same pre-processing steps as the training data, with an additional step to extract the region of interest (ROI).</p>
<p>ROI refers to a specific area within an image that is of particular interest for further analysis. In our context, the ROI corresponds to the portion of the image that potentially contains plastic waste objects, as shown in <xref ref-type="fig" rid="F7">Figure 7</xref> (<xref ref-type="bibr" rid="B9">9</xref>).</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption><p>Region of interest bounded by green color.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g007.tif"/>
</fig>
<p>After applying all the above algorithms, the images are then standardized using <italic>test_datagenerator</italic>, which prepares the images for prediction by applying the necessary transformations and pre-processing.</p>
<p>The pre-trained model then applies its learned parameters and architecture to analyze the images and generate predictions.</p>
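Standardization at test time typically mirrors the training-time rescaling. A minimal sketch; the 1/255 rescale factor and the 250 &#x00D7; 250 input size are our assumptions about what the generator applies, based on the input shape in Table 1:

```python
import numpy as np

def standardize(image):
    """Rescale 8-bit pixel values to [0, 1] and add a batch dimension."""
    x = image.astype(np.float32) / 255.0
    return x[np.newaxis, ...]  # shape (1, H, W, C) expected by the model

batch = standardize(np.full((250, 250, 3), 255, dtype=np.uint8))
print(batch.shape, batch.max())  # (1, 250, 250, 3) 1.0
```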
</sec>
</sec>
<sec id="S3">
<title>Test and validation</title>
<p>The ability to accurately identify plastic waste items holds great significance in combating environmental degradation.</p>
<p>This section includes a detailed exploration of the accuracy and loss curves that provide valuable insights into the system&#x2019;s learning process.</p>
<p>In addition, we delve into the evaluation metrics and performance analysis of the system. We present an in-depth examination of key metrics: accuracy, precision, recall, F1 score, the receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC). These metrics serve as quantitative measures of the system&#x2019;s effectiveness in distinguishing between plastic waste and non-plastic waste items.</p>
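All four scalar metrics can be computed directly from confusion-matrix counts. The counts below (TP = 154, TN = 160, FP = 5, FN = 2) are our reading of the counts appearing in the equations later in this section, and reproduce the reported values:

```python
def metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tp=154, tn=160, fp=5, fn=2)
print(round(acc, 6), round(prec, 6), round(rec, 6), round(f1, 5))
# 0.978193 0.968553 0.987179 0.97778
```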
<sec id="S3.SS1">
<title>Training results</title>
<p>After training our model for 15 epochs, we found that the training accuracy increased more rapidly than the validation accuracy as the epochs progressed, as shown in <xref ref-type="fig" rid="F8">Figure 8</xref>. Likewise, the training loss decreased at a faster rate than the validation loss. This reflects the fact that the model acquires more information with each iteration. Initially, the model improves rapidly, but over time it reaches a plateau, indicating that it can learn little more from the data.</p>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption><p>Accuracy and loss curves.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g008.tif"/>
</fig>
</sec>
<sec id="S3.SS2">
<title>Evaluation results</title>
<sec id="S3.SS2.SSS1">
<title>Confusion matrix</title>
<p>The confusion matrix shows the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) predicted by our model (<xref ref-type="fig" rid="F9">Figure 9</xref>).</p>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption><p>Confusion matrix.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g009.tif"/>
</fig>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>True Negative (0, 0): The model correctly predicted the negative class, which represents 49.84% of the total number of instances. This indicates that our model effectively identifies and classifies instances that do not belong to the positive class.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>False Positive (0, 1): The model incorrectly predicted the positive class for 0.62% of instances that actually belonged to the negative class. This suggests that there is a small fraction of instances for which our model generated false alarms or incorrectly identified positive instances.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>False Negative (1, 0): The model incorrectly predicted the negative class for 1.56% of instances that actually belonged to the positive class. This indicates that there is a small proportion of instances for which our model failed to identify the positive class.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>True Positive (1, 1): The model correctly predicted the positive class in 47.98% of cases, indicating its ability to accurately identify positive cases.</p>
</list-item>
</list>
</sec>
<sec id="S3.SS2.SSS2">
<title>Accuracy score</title>
<p>This metric measures the ratio of correctly predicted cases, both positive and negative, to the total number of cases. It is used to assess the model&#x2019;s overall correctness.</p>
<disp-formula id="S3.Ex2"><mml:math id="M1">
<mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>u</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo>=</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mn>154</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>160</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>154</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>160</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>5</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>0.978193</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
</sec>
<sec id="S3.SS2.SSS3">
<title>Precision</title>
<p>This metric measures the ratio of correctly predicted positive cases to the total number of cases predicted as positive. It is used to assess the model&#x2019;s ability to minimize false positives.</p>
<disp-formula id="S3.Ex3"><mml:math id="M7">
<mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>n</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mfrac>
<mml:mn>154</mml:mn>
<mml:mrow>
<mml:mn>154</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>0.968553</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
</sec>
<sec id="S3.SS2.SSS4">
<title>Recall</title>
<p>This metric measures the ratio of correctly predicted positive cases to the total number of actual positive cases. It is used to evaluate our model&#x2019;s ability to minimize false negatives.</p>
<disp-formula id="S3.Ex4"><mml:math id="M8">
<mml:mrow>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>l</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mfrac>
<mml:mn>154</mml:mn>
<mml:mrow>
<mml:mn>154</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>0.987179</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
</sec>
<sec id="S3.SS2.SSS5">
<title>F1 score</title>
<p>The F1 score combines precision and recall into a single metric, calculated as their harmonic mean, providing a balance between the two.</p>
<disp-formula id="S3.Ex5"><mml:math id="M9">
<mml:mrow>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+5pt">
<mml:mn>1</mml:mn>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>e</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mn>2</mml:mn>
</mml:mpadded>
<mml:mo rspace="5.8pt">&#x00D7;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>n</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">&#x00D7;</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>0.97778</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
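<p>A minimal sketch of the same computation, using the precision and recall values reported above (illustrative only, not the evaluation code used in this work):</p>

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision and recall values reported above:
print(round(f1_score(0.968553, 0.987179), 5))  # 0.97778
```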
</sec>
<sec id="S3.SS2.SSS6">
<title>ROC AUC score</title>
<p>This metric is used for binary classification problems.</p>
<p>The ROC curve plots the false positive rate (x-axis) against the true positive rate (y-axis) at various classification threshold (cutoff) settings.</p>
<sec id="S3.SS2.SSS6.Px1">
<title>ROC curve</title>
<p>It is a graph showing the performance of a classification model at all classification thresholds, plotting the true positive rate (recall) against the false positive rate.</p>
<disp-formula id="S3.Ex6"><mml:math id="M10">
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>u</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+5pt">
<mml:mi>e</mml:mi>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+5pt">
<mml:mi>e</mml:mi>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>e</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>y</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>l</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="S3.Ex7"><mml:math id="M11">
<mml:mrow>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+5pt">
<mml:mi>e</mml:mi>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+5pt">
<mml:mi>e</mml:mi>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>e</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>f</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>y</mml:mi>
</mml:mpadded>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
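<p>The two rates above are defined at a single threshold; sweeping the threshold across the score range traces out the ROC curve. A minimal sketch with hypothetical labels and scores (not the evaluation code used in this work):</p>

```python
def tpr_fpr(y_true, scores, threshold):
    """One ROC point: true and false positive rates at a given score threshold."""
    tp = sum(1 for t, s in zip(y_true, scores) if t == 1 and s >= threshold)
    fn = sum(1 for t, s in zip(y_true, scores) if t == 1 and s < threshold)
    fp = sum(1 for t, s in zip(y_true, scores) if t == 0 and s >= threshold)
    tn = sum(1 for t, s in zip(y_true, scores) if t == 0 and s < threshold)
    return tp / (tp + fn), fp / (fp + tn)

# Hypothetical labels and model scores, for illustration only:
y_true = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.4, 0.1]
print(tpr_fpr(y_true, scores, 0.5))  # (1.0, 0.0)
```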
</sec>
<sec id="S3.SS2.SSS6.Px2">
<title>AUC (Area under the ROC curve)</title>
<p>It is a numerical measure of the performance of our classification model. It represents the area under the ROC curve and ranges from 0 to 1. A higher AUC value indicates a better-performing model, with 1 being a perfect classifier and 0.5 representing a random classifier.</p>
<p>Our model reached a ROC AUC value of 0.997669 (<xref ref-type="fig" rid="F10">Figure 10</xref>).</p>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption><p>Receiver operating characteristic (ROC) Curve and AUC.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-26-g010.tif"/>
</fig>
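<p>For intuition, the AUC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal sketch of this rank-based computation on hypothetical data (not the evaluation code used in this work):</p>

```python
def auc(y_true, scores):
    """AUC as the probability that a random positive outranks a random
    negative (ties count half); equivalent to the area under the ROC curve."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels and scores, for illustration only:
print(auc([1, 1, 0, 0], [0.9, 0.6, 0.4, 0.1]))  # 1.0
```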
<p>In conclusion, the evaluation of our CNN model showed strong performance across all of the above metrics, confirming its effectiveness in correctly classifying the target outcomes. Our system represents a step toward more sustainable and environmentally conscious waste management, and the insights gained from this evaluation provide a foundation for further research and development on plastic waste reduction and mitigation.</p>
</sec>
</sec>
</sec>
</sec>
<sec id="S4" sec-type="conclusion">
<title>Conclusion</title>
<p>To conclude this work, we presented a comprehensive study on the development of a CNN model for plastic waste detection that offers promising solutions to the challenges posed by plastic waste management. By leveraging the capabilities of artificial intelligence and computer vision, our research contributes to sustainable waste management practices and paves the way for future advancements in the field.</p>
<p>The proposed approach combines several computer vision algorithms and deep learning methods. The system&#x2019;s architecture and evaluation were explained in the previous sections, along with the theoretical background needed to foster a deeper comprehension of the system&#x2019;s intricacies and enable readers to critically analyze the obtained results.</p>
<p>Various measurement metrics were used to demonstrate our model&#x2019;s effectiveness and were carefully selected to provide comprehensive insights into different aspects of our model&#x2019;s capabilities.</p>
<p>While our binary classification system for plastic detection has shown promising results, several exciting directions remain for future research and development. These include real-time object detection from a camera feed, further data augmentation, and integration with waste management systems. We also plan to extend the model to multiple material classes, such as plastic, paper, glass, and metal, to offer users a more comprehensive solution for waste sorting and recycling. With these enhancements, our system can address evolving challenges in plastic detection and contribute to sustainable waste management practices.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1"><label>1.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ritchie</surname> <given-names>H</given-names></name> <name><surname>Roser</surname> <given-names>M.</given-names></name></person-group> <source><italic>Plastic Pollution.</italic></source> <publisher-loc>Oxford</publisher-loc>: <publisher-name>Our World in Data</publisher-name> (<year>2018</year>).</citation></ref>
<ref id="B2"><label>2.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>M</given-names></name> <name><surname>Thung</surname> <given-names>G.</given-names></name></person-group> <source><italic>Classification of Trash for Recyclability Status.</italic> CS229 project report</source>. (<volume>Vol. 1</volume>). (<year>2016</year>).</citation></ref>
<ref id="B3"><label>3.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Torres-Garc&#x00ED;a</surname> <given-names>A</given-names></name> <name><surname>Rodea-Aragon</surname> <given-names>O</given-names></name> <name><surname>Longoria-Gandara</surname> <given-names>O</given-names></name> <name><surname>Sanchez-Garcia</surname> <given-names>F</given-names></name> <name><surname>Gonzalez-Jimenez</surname> <given-names>LE</given-names></name></person-group>. <article-title>Intelligent waste separator.</article-title> <source><italic>Computacion Sistemas.</italic></source> (<year>2015</year>) <volume>19</volume>:<fpage>487</fpage>&#x2013;<lpage>500</lpage>.</citation></ref>
<ref id="B4"><label>4.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mittal</surname> <given-names>G</given-names></name> <name><surname>Kaushal</surname> <given-names>BY</given-names></name> <name><surname>Mohit</surname> <given-names>G</given-names></name> <name><surname>Narayanan</surname> <given-names>CK</given-names></name></person-group>. <article-title>Spotgarbage: smartphone app to detect garbage using deep learning.</article-title> <source><italic>Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing.</italic></source> <publisher-loc>New York, NY</publisher-loc>: (<year>2016</year>). p. <fpage>940</fpage>&#x2013;<lpage>5</lpage>.</citation></ref>
<ref id="B5"><label>5.</label><citation citation-type="journal"><collab>Datagen.</collab><source><italic>Image Segmentation.</italic></source> (<year>2023</year>). Available online at: <ext-link ext-link-type="uri" xlink:href="https://datagen.tech/guides/image-annotation/image-segmentation/">https://datagen.tech/guides/image-annotation/image-segmentation/</ext-link> (accessed July 02, 2023).</citation></ref>
<ref id="B6"><label>6.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fisher</surname> <given-names>R</given-names></name> <name><surname>Perkins</surname> <given-names>A.</given-names></name></person-group> <source><italic>Grayscale Images.</italic></source> (<year>2000</year>). Available online at: <ext-link ext-link-type="uri" xlink:href="https://homepages.inf.ed.ac.uk/rbf/HIPR2/gryimage.htm">https://homepages.inf.ed.ac.uk/rbf/HIPR2/gryimage.htm</ext-link> (accessed July 02, 2023).</citation></ref>
<ref id="B7"><label>7.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geospatial</surname> <given-names>LH.</given-names></name></person-group> <source><italic>Image Thresholding.</italic></source> (<year>2023</year>). Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.l3harrisgeospatial.com/docs/ImageThresholding.html">https://www.l3harrisgeospatial.com/docs/ImageThresholding.html</ext-link> (accessed July 02, 2023).</citation></ref>
<ref id="B8"><label>8.</label><citation citation-type="journal"><collab>Towards AI.</collab><source><italic>What is a Canny Edge Detection Algorithm?</italic></source> (<year>2023</year>). <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Towards AI</publisher-name>.</citation></ref>
<ref id="B9"><label>9.</label><citation citation-type="journal"><collab>SmartRay.</collab><source><italic>Region of Interest (ROI).</italic></source> (<year>2023</year>). Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.smartray.com/glossary/region-of-interest-roi/">https://www.smartray.com/glossary/region-of-interest-roi/</ext-link> (accessed July 02, 2023).</citation></ref>
</ref-list>
</back>
</article>
