<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Bohr. Scit.</journal-id>
<journal-title>BOHR International Journal of Smart Computing and Information Technology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Bohr. Scit.</abbrev-journal-title>
<issn pub-type="epub">2583-2026</issn>
<publisher>
<publisher-name>BOHR</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.54646/bijscit.2022.27</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Dynamic Hough transform for robust lane detection and navigation in real time</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Hiremath</surname> <given-names>Shrikant</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Shreenidhi</surname> <given-names>B.</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computer Science and Application, Government First Grade College for Women</institution>, <addr-line>Jamkhandi</addr-line>, <country>India</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Computer Science, Karnataka Science College</institution>, <addr-line>Dharwad</addr-line>, <country>India</country></aff>
<author-notes>
<corresp id="c001">&#x002A;Correspondence: Shrikant Hiremath, <email>smswami21@gmail.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>25</day>
<month>08</month>
<year>2022</year>
</pub-date>
<volume>3</volume>
<issue>1</issue>
<fpage>50</fpage>
<lpage>58</lpage>
<history>
<date date-type="received">
<day>21</day>
<month>07</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>08</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Hiremath and B.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Hiremath and B</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Traffic safety is enhanced by real-time lane-line monitoring and recognition in advanced driving assistance systems. A new method for recognizing and continuously monitoring lane lines using the Hough transform is proposed in this study. A vehicle is equipped with a camera that takes pictures of the road, which are then processed to enhance the visibility of the lane lines. The Hough transform, applied to the preprocessed images, allows the system to recognize lane lines. To ensure continuous monitoring of the lane lines, a Kalman filter is used. The performance of the proposed system, implemented in Python using OpenCV, is assessed on a comprehensive set of real-time driving scenarios. Experimental results demonstrate the system&#x2019;s viability and efficacy.</p>
</abstract>
<kwd-group>
<kwd>Advanced driving assistance systems (ADAS)</kwd>
<kwd>road lines detection</kwd>
<kwd>tracking</kwd>
<kwd>Hough transform</kwd>
<kwd>real-time</kwd>
<kwd>image processing techniques</kwd>
<kwd>camera</kwd>
<kwd>preprocessing</kwd>
<kwd>Kalman filter</kwd>
<kwd>Python</kwd>
<kwd>OpenCV</kwd>
</kwd-group>
<counts>
<fig-count count="14"/>
<table-count count="5"/>
<equation-count count="10"/>
<ref-count count="22"/>
<page-count count="9"/>
<word-count count="5910"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>1. Introduction</title>
<p>Road markings must be detected and continuously monitored in advanced driving assistance systems (ADAS). Lane identification and monitoring significantly increase driving safety and convenience by warning drivers when they approach or cross lane boundaries. The Hough transform is commonly used as a preferred method for lane monitoring because of its reliability and precision. The Hough transform can precisely detect straight lines in image frames and thus precisely determine lane borders. Its computational efficiency makes it the best method for real-time lane monitoring applications.</p>
<p>The suggested ADAS has four essential phases: image acquisition, image processing, lane detection and monitoring, and lane departure warning. During the image acquisition step, images of the road scene are captured with a camera. The collected images are then preprocessed to reduce noise, improve contrast, and convert them to grayscale. The Hough transform is then used to find the straight lines that match the lane markings. During the lane recognition and monitoring stage, a lane fitting algorithm determines the precise lane boundaries by fitting a polynomial curve to the detected lines. GPS and IMU sensors are used to determine the position and orientation of the vehicle. When the vehicle approaches or crosses lane lines, the lane departure warning system is activated.</p>
<p>The suggested approach can be used on vehicles with little in the way of computational power and provides a generalized and continuous solution for lane monitoring. The method can precisely detect lane lines in a variety of lighting and weather conditions because it is built to be durable and dependable.</p>
<sec id="S1.SS1">
<title>1.1. Literature review</title>
<p>In recent years, numerous studies (<xref ref-type="bibr" rid="B1">1</xref>&#x2013;<xref ref-type="bibr" rid="B22">22</xref>) have contributed to the advancement of vision-based lane detection. Al Smadi et al. and Kodeeswari and Daniel (<xref ref-type="bibr" rid="B1">1</xref>) described a driver assistance system with continuous lane monitoring. Kodeeswari et al. (<xref ref-type="bibr" rid="B2">2</xref>) presented real-time lane queue detection for driver assistance systems based on morphological operations at the fourth IEEE ISPCC conference in 2017. Waykole et al. (<xref ref-type="bibr" rid="B3">3</xref>) reviewed monitoring and recognition algorithms for advanced driver assistance systems. Mulyanto et al. (<xref ref-type="bibr" rid="B4">4</xref>) proposed an advanced driver assistance system with continuous monitoring and recognition using two sequential frames. Wei et al. (<xref ref-type="bibr" rid="B5">5</xref>) studied an algorithm for lane detection and navigation based on an improved Hough transform. Kumar et al. (<xref ref-type="bibr" rid="B6">6</xref>) presented an effective method for detecting highway lanes based on the Kalman filter and the Hough transform. Gaikwad and Lokhande (<xref ref-type="bibr" rid="B7">7</xref>) described lane departure warning for better driving assistance. Fan et al. (<xref ref-type="bibr" rid="B8">8</xref>) developed machine vision-based robust lane monitoring and recognition. Manoharan and Daniel (<xref ref-type="bibr" rid="B9">9</xref>) proposed a framework for continuous lane recognition on steep routes based on image processing for driver aid systems. Hechri et al. (<xref ref-type="bibr" rid="B10">10</xref>) developed a driver aid system with robust lane recognition and traffic sign recognition. Chen et al. (<xref ref-type="bibr" rid="B11">11</xref>) suggested PointLaneNet, an efficient end-to-end CNN for accurate real-time lane detection. Sun et al. (<xref ref-type="bibr" rid="B12">12</xref>) showed that lanes can be recognized and monitored using an enhanced Hough transform and the least-squares method. Barua et al. (<xref ref-type="bibr" rid="B14">14</xref>) presented an effective lane recognition and monitoring technique for road safety. Li et al. (<xref ref-type="bibr" rid="B13">13</xref>) reported recognition of nighttime lane markings using Canny detection and the Hough transform. Bisht et al. (<xref ref-type="bibr" rid="B15">15</xref>) described road lane recognition and monitoring using the Hough transform and inter-frame clustering. Hechri and Mtibaa (<xref ref-type="bibr" rid="B16">16</xref>) proposed a driver assistance system with recognition of lanes and road signs. Katru et al. (<xref ref-type="bibr" rid="B17">17</xref>) developed a modified additive Hough transform for improved parallel lane recognition. Yi et al. (<xref ref-type="bibr" rid="B18">18</xref>) developed a method for lane recognition based on vision intelligence. Chen et al. (<xref ref-type="bibr" rid="B19">19</xref>) proposed finding road lanes and taillights in a nighttime environment to detect vehicles. Machaiah et al. (<xref ref-type="bibr" rid="B20">20</xref>) reviewed the algorithms used by advanced driver assistance systems for lane sensing and tracing. Guo et al. (<xref ref-type="bibr" rid="B21">21</xref>) presented a robust approach to continuous lane marking recognition for embedded systems. Kukkala et al. (<xref ref-type="bibr" rid="B22">22</xref>) argued that advanced driver assistance systems provide a path to autonomous vehicles.</p>
</sec>
<sec id="S1.SS2">
<title>1.2. Equations</title>
<p>The main equations used in the proposed method are summarized below. The 2D Gaussian smoothing kernel is defined as:</p>
<disp-formula id="S1.Ex1">
<mml:math id="M1">
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">G</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo rspace="3.5pt">,</mml:mo>
<mml:mi mathvariant="normal">y</mml:mi>
<mml:mo rspace="2.8pt" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="4.8pt">=</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mpadded width="+3pt">
<mml:mn>1</mml:mn>
</mml:mpadded>
<mml:mo rspace="3.5pt">/</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>&#x03C0;</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mi mathvariant="normal">&#x03C3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="2.5pt" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo rspace="3.5pt">&#x00D7;</mml:mo>
<mml:mpadded width="+3pt">
<mml:mi>exp</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mpadded width="+3pt">
<mml:msup>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mpadded>
<mml:mo rspace="3.5pt">+</mml:mo>
<mml:msup>
<mml:mi mathvariant="normal">y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo rspace="4.5pt" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo rspace="3.5pt">/</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mi mathvariant="normal">&#x03C3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Magnitude:</p>
<disp-formula id="S1.Ex2">
<mml:math id="M2">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>Mag</mml:mi>
</mml:mpadded>
<mml:mo rspace="3.8pt">=</mml:mo>
<mml:mrow>
<mml:mi>sqrt</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>Gx</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo rspace="3.5pt">+</mml:mo>
<mml:msup>
<mml:mi>Gy</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Direction can be computed as:</p>
<disp-formula id="S1.Ex3">
<mml:math id="M3">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>Dir</mml:mi>
</mml:mpadded>
<mml:mo rspace="3.8pt">=</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>atan2</mml:mi>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>Gy</mml:mi>
<mml:mo rspace="3.5pt">,</mml:mo>
<mml:mi>Gx</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where sqrt is the square root function and atan2 is the two-argument arctangent function.</p>
<p>Parameter space can be calculated as:</p>
<disp-formula id="S1.Ex4">
<mml:math id="M4">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi mathvariant="normal">&#x03C1;</mml:mi>
</mml:mpadded>
<mml:mo rspace="3.8pt">=</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mtext>cos</mml:mtext>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="normal">&#x03B8;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="3.5pt">+</mml:mo>
<mml:mrow>
<mml:mi mathvariant="normal">y</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mtext>sin</mml:mtext>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="normal">&#x03B8;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>A finite impulse response (FIR) filter computes each output sample as a weighted sum of the current and previous input samples:</p>
<disp-formula id="S1.Ex5">
<mml:math id="M5">
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">&#x22EF;</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>N</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo>-</mml:mo>
<mml:mi>N</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where:</p>
<p><italic>y(n)</italic> is the output sample at time <italic>n</italic>, <italic>h(k)</italic> is the <italic>k</italic>th filter coefficient, and <italic>x(n)</italic> is the input sample at time <italic>n</italic>.</p>
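<p>As a minimal sketch, the FIR sum above can be implemented with NumPy; the 3-tap moving-average coefficients below are an illustrative choice, not taken from the paper.</p>

```python
import numpy as np

def fir_filter(x, h):
    # y(n) = h(0)x(n) + h(1)x(n-1) + ... is exactly the weighted sum
    # computed by a linear convolution, truncated to the input length
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    return np.convolve(x, h)[:len(x)]

# 3-tap moving average (hypothetical coefficients for illustration)
y = fir_filter([1.0, 2.0, 3.0, 4.0], [1/3, 1/3, 1/3])
```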
</sec>
</sec>
<sec id="S2">
<title>2. Overview of lane-line RTRD algorithm</title>
<p>A key component of ADAS, which aims to improve safety and convenience while driving, is the dynamic Hough transform for robust lane detection and navigation in real time. The commonly used image processing method known as the Hough transform is crucial for locating lines in an image. The process starts by acquiring a road image using a camera that is mounted on the car. The acquired image is then preprocessed to enhance its quality and get rid of any extra noise. The preprocessed image is then subjected to the Hough transform in order to identify the lines that serve as road markings.</p>
<p>In this method, each pixel in the picture is treated as a potential point on a line. The Hough transform maps all of these points into a parameter space, where each point corresponds to a line in the original image. All lines in the picture, including lane lines, can be detected with this mapping. Once the lane lines have been detected, their position relative to the vehicle is tracked in real time. This is achieved by estimating the vanishing point of the lane lines and using it as a reference for tracking their position. The vanishing point is the point where the lane lines appear to converge in the distance.</p>
<p>The recognized and tracked lane lines are superimposed on the provided image to improve driver assistance and provide a visual cue for keeping in the lane. This visual aid not only helps the driver, but ADAS can also use it to add extra safety features like lane departure alert devices. Overall, continuous road lane line recognition and monitoring utilizing the Hough transform is an essential ADAS technology that gives drivers increased road safety and convenience.</p>
<p>The proposed method involves the following steps:</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>Image acquisition: A camera installed on the car takes a picture of the road in its path.</p>
</list-item>
<list-item>
<label>2.</label>
<p>Preprocessing: To improve the quality of the acquired image and get rid of any noise or undesired artifacts, the image is preprocessed. This may involve methods like edge detection, smoothing, and contrast amplification.</p>
</list-item>
<list-item>
<label>3.</label>
<p>Region of interest (ROI) selection: A ROI is selected inside the image where the existence of lane lines is anticipated in order to improve the accuracy of lane-line recognition and reduce computational overhead.</p>
</list-item>
<list-item>
<label>4.</label>
<p>Hough transform: The preprocessed image is subjected to the Hough transform in order to locate the lane lines inside the designated ROI. This approach maps every potential point on a line within the image to a curve in a parameter space. The points where these curves intersect in the parameter space indicate the positions of the lane lines in the image.</p>
</list-item>
<list-item>
<label>5.</label>
<p>Lane-line monitoring: The location of the identified lane lines with respect to the vehicle is tracked in real time. Estimating the vanishing point of the lane lines and utilizing it as a guide to track their location are required for this.</p>
</list-item>
<list-item>
<label>6.</label>
<p>Visualization of lane lines: The detected and tracked lane lines are superimposed on the original image to give the driver a visual cue to help them stay in their lane. Additionally, ADAS can use this data to give extra safety features like lane departure warning systems.</p>
</list-item>
</list>
<p>The suggested method of real-time road lane-line detection and tracking using the Hough transform in an advanced driving assistance system is a reliable and effective way to increase driver convenience and safety. It can be applied to many different ADAS systems and is easily adaptable to different lane configurations and driving conditions.</p>
<p>To facilitate visualization, the Lane-Line RTD algorithm&#x2019;s operation is depicted through the flowchart depicted in <xref ref-type="fig" rid="F1">Figure 1</xref>. Subsequently, the resulting straight lane lines are showcased on the original color image, as exemplified in the illustration provided in <xref ref-type="fig" rid="F2">Figure 2</xref>.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>The flowchart for the lane-line RTRD algorithm.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g001.tif"/>
</fig>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>The lane-line RTRD algorithm detects and identifies the boundaries of the detected lane lines.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g002.tif"/>
</fig>
</sec>
<sec id="S3">
<title>3. Canny edge detector</title>
<p>The proposed algorithm consists of a number of phases, each of which is represented mathematically. The main equations used in the Canny edge detector algorithm are listed below:</p>
<sec id="S3.SS1">
<title>Gaussian smoothing</title>
<p>To reduce noise in an image and make edges simpler to discern, the Gaussian smoothing filter is utilized. The smoothing process is carried out using a matrix called the 2D Gaussian kernel. This is how the kernel is described:</p>
<disp-formula id="S3.Ex7">
<mml:math id="M9">
<mml:mrow>
<mml:mrow>
<mml:mtext>G</mml:mtext>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo rspace="3.8pt" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="3.8pt">=</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>&#x03C0;</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mi mathvariant="normal">&#x03C3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>exp</mml:mi>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi mathvariant="normal">y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mi mathvariant="normal">&#x03C3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where:</p>
<p><italic>x</italic> and <italic>y</italic> are the coordinates of a point in the plane.</p>
<p>The Gaussian distribution&#x2019;s standard deviation is &#x03C3;.</p>
<p>exp is the exponential function.</p>
<p>&#x03C0; is the mathematical constant 3.14159.</p>
<p>The Gaussian function is a bell-shaped curve centered on the origin. The width of the curve is determined by the value of &#x03C3;: a higher value of &#x03C3; produces a wider curve, whereas a lower value produces a narrower one.</p>
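<p>As a sketch, the kernel formula above can be evaluated directly in NumPy; the 5 &#x00D7; 5 size and &#x03C3; = 1 are illustrative choices.</p>

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Coordinates centered on the origin, e.g. -2..2 for size 5
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    # G(x, y) = (1 / (2*pi*sigma^2)) * exp(-(x^2 + y^2) / (2*sigma^2))
    g = (1.0 / (2.0 * np.pi * sigma**2)) * np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()  # normalize so smoothing preserves overall brightness

kernel = gaussian_kernel(5, sigma=1.0)
```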
</sec>
<sec id="S3.SS2">
<title>Gradient detection</title>
<p>The Sobel operator, which finds regions of the image with the highest intensity fluctuations, is used to determine the gradient of the image. The two kernels of the operator, Gx and Gy, are convolved with the input picture to produce the x and y derivatives. The gradient&#x2019;s size and direction are then determined using the following formulas:</p>
</sec>
<sec id="S3.SS3">
<title>Magnitude</title>
<disp-formula id="S3.Ex8">
<mml:math id="M11">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>Mag</mml:mi>
</mml:mpadded>
<mml:mo rspace="3.8pt">=</mml:mo>
<mml:mrow>
<mml:mi>sqrt</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>Gx</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>Gy</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</sec>
<sec id="S3.SS4">
<title>Direction</title>
<disp-formula id="S3.Ex9">
<mml:math id="M13">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>Dir</mml:mi>
</mml:mpadded>
<mml:mo rspace="3.8pt">=</mml:mo>
<mml:mrow>
<mml:mi>atan2</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>Gy</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>Gx</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where sqrt is the square root function and atan2 is the arctangent function.</p>
</sec>
<sec id="S3.SS5">
<title>Non-maximum suppression</title>
<p>Non-maximum suppression (NMS) is a post-processing step in edge detection that thins the edges, keeping only the strongest boundary pixels.</p>
<p>The NMS algorithm compares the magnitude of each pixel in the gradient magnitude image to the magnitudes of the next pixels in the gradient&#x2019;s direction. The magnitude of the current pixel is set to zero if it is not the maximum along the gradient&#x2019;s direction. As a result, only local maxima in the gradient direction are kept. In other words, NMS removes from the gradient any pixels that do not correspond to local maxima. This aids in sharpening the edges and removing noise and artifacts from the edge image.</p>
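<p>A plain-NumPy sketch of this comparison, with the gradient direction quantized to the four neighbor axes (an implementation choice for illustration, not necessarily the authors&#x2019; exact variant):</p>

```python
import numpy as np

def non_maximum_suppression(mag, direction):
    """Zero out pixels that are not local maxima along their gradient direction."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    angle = np.rad2deg(direction) % 180.0  # fold angles into [0, 180)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # horizontal gradient: compare left/right
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                  # diagonal, 45 degrees
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                 # vertical gradient: compare up/down
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                           # diagonal, 135 degrees
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out

# The peak of a 1-pixel ridge survives; its weaker neighbors are suppressed
ridge = np.zeros((3, 5))
ridge[1] = [1, 2, 5, 2, 1]
thinned = non_maximum_suppression(ridge, np.zeros_like(ridge))
```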
</sec>
<sec id="S3.SS6">
<title>Thresholding</title>
<p>To generate the final edge map, a double-thresholding (hysteresis) step with a low and a high threshold is applied to the non-maximum-suppressed image. Pixels whose gradient magnitude exceeds the high threshold are marked as strong edges; pixels whose magnitude falls between the low and high thresholds are marked as weak edges. Weak edges are kept only when they connect to strong edges. The Canny edge detector thus combines Gaussian filtering, gradient detection, non-maximum suppression, and hysteresis thresholding through a series of mathematical formulas, and this combination is widely used in computer vision applications to accurately detect edges in images.</p>
</sec>
</sec>
<sec id="S4">
<title>4. Hough transform</title>
<p>The Hough transform algorithm is well-known and frequently used in image processing and visual analysis because it can identify lines, circles, and other shapes present in an image. Its core idea is to map each point in the image space into a parameter space, where the parameters represent the shape on which the point may lie. Points in parameter space that correspond to the same shape cluster together, which makes the shape easier to identify. The basic equations of the Hough transform as applied to finding lines are listed below.</p>
<sec id="S4.SS1">
<title>Image space</title>
<p>A point is represented by its <italic>(x, y)</italic> coordinates in the image space. A binary value can be used to represent each pixel in the image space, indicating whether or not that particular point is an edge.</p>
</sec>
<sec id="S4.SS2">
<title>Parameter space</title>
<p>In the parameter space, a line is represented by two parameters, the angle theta (&#x03B8;) and the distance rho (&#x03C1;) from the origin to the line, as shown below:</p>
<disp-formula id="S4.Ex10">
<mml:math id="M15">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi mathvariant="normal">&#x03C1;</mml:mi>
</mml:mpadded>
<mml:mo rspace="3.8pt">=</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>cos</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi mathvariant="normal">&#x03B8;</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi mathvariant="normal">&#x03B8;</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Let <italic>x</italic> and <italic>y</italic> represent the coordinates of a point located on a line. The line is characterized by the angle &#x03B8; it makes with a reference axis, and &#x03C1; denotes the perpendicular distance from the origin (0, 0) to the line.</p>
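<p>As a worked example with illustrative values: the point (x, y) = (3, 4) on a horizontal line (&#x03B8; = 90&#x00B0;, i.e., the line y = 4) gives &#x03C1; = 4, the line&#x2019;s perpendicular distance from the origin.</p>

```python
import math

theta = math.pi / 2  # 90 degrees: a horizontal line
# rho = x cos(theta) + y sin(theta)
rho = 3 * math.cos(theta) + 4 * math.sin(theta)
```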
</sec>
<sec id="S4.SS3">
<title>Accumulator</title>
<p>Each element of the accumulator array created by the Hough transform corresponds to a point in the parameter space. For each edge point in the image space, the equation above is used to compute the corresponding line in the parameter space, and the accumulator array is incremented at the corresponding (&#x03C1;, &#x03B8;) location. Repeating this procedure for all edge points in the image space yields an accumulator array containing the number of points that correspond to each line in the image space.</p>
</sec>
<sec id="S4.SS4">
<title>Thresholding</title>
<p>The accumulator array is subjected to a thresholding step in order to identify the lines with the largest number of points. These lines, which may be recovered and plotted back onto the original picture, correspond to the lines in the image space that are most likely to be genuine edges. The Hough transform algorithm converts points from the image space to the parameter space, adds up the points in the parameter space, and then extracts the lines that correspond to the most significant points using a series of mathematical equations. The Hough transform, which is frequently used in computer vision applications, can reliably identify line markings in a collected image by combining these processes.</p>
</sec>
</sec>
<sec id="S5">
<title>5. Implementation of the lane-line RTRD algorithm</title>
<p>The implementation of this algorithm involves writing code in Python using the OpenCV library. Here is an overview of the implementation steps:</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>Choosing a video or picture file and decoding it: The &#x201C;picture_test&#x201D; directory provides a series of example images for evaluation and testing, and the lane-line RTRD algorithm scans each one in turn. The os.listdir() function, which produces a list of all files in the directory, is used to read the pictures in alphabetical order. The order matters because the lane-line RTRD algorithm evaluates the pictures in sequence. <xref ref-type="fig" rid="F3">Figure 3</xref> shows an example of one of these pictures.</p>
</list-item>
</list>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>The image in its original form.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g003.tif"/>
</fig>
<list list-type="simple">
<list-item>
<label>2.</label>
<p>Grayscale image conversion from color: The cv2.cvtColor() method from the OpenCV package is used to convert color to grayscale. The input image (in this case, the color test image) and the color conversion code, which specifies the intended type of conversion, are the two inputs that this function expects. The correct code to convert a color image to grayscale is cv2.COLOR_BGR2GRAY. <xref ref-type="fig" rid="F4">Figure 4</xref> shows the image in <xref ref-type="fig" rid="F3">Figure 3</xref> in grayscale.</p>
</list-item>
</list>
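<p>The conversion in step 2 can also be sketched without OpenCV; cv2.COLOR_BGR2GRAY is documented to apply the standard luma weights, which the following numpy equivalent reproduces (exact rounding may differ slightly from OpenCV's internal fixed-point arithmetic).</p>

```python
import numpy as np

def bgr_to_gray(img):
    """Luma-weighted grayscale: gray = 0.114*B + 0.587*G + 0.299*R,
    the same weighting cv2.COLOR_BGR2GRAY is documented to use."""
    b = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    r = img[..., 2].astype(float)
    return np.rint(0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

# A 1 x 2 BGR image: one pure-blue pixel and one white pixel
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
print(bgr_to_gray(img))  # blue maps to a dark gray (~29); white stays 255
```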
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>The image in its original form, represented in grayscale.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g004.tif"/>
</fig>
<list list-type="simple">
<list-item>
<label>3.</label>
<p>Noise reduction: To smooth a picture, use the OpenCV library&#x2019;s cv2.GaussianBlur() function. <xref ref-type="fig" rid="F5">Figure 5</xref> shows the image in <xref ref-type="fig" rid="F4">Figure 4</xref> after filtering. Three arguments are required by the function:</p>
</list-item>
</list>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>After using a Gaussian blur filter, the captured picture is displayed in grayscale.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g005.tif"/>
</fig>
<p>The source picture, which is a test image in grayscale.</p>
<p>The kernel size, which determines the size of the Gaussian filter.</p>
<p>The standard deviation of the Gaussian function, which determines how much smoothing is applied.</p>
<p>A picture is smoothed using the Gaussian filter, which lowers noise and makes it simpler to spot edges. The filter&#x2019;s size is determined by the kernel size, and the amount of smoothing is determined by the Gaussian function&#x2019;s standard deviation.</p>
<p>In the example, the kernel size is set to (5, 5), which means that the Gaussian filter will be a 5 &#x00D7; 5 matrix. The standard deviation of the Gaussian function is set to 1.5, which means that the filter will be relatively smooth. The cv2.GaussianBlur() function is a powerful tool for smoothing images. It is used in a variety of applications, such as edge detection and noise removal.</p>
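<p>The 5 &#x00D7; 5 kernel with standard deviation 1.5 described above can be constructed directly; the following is a sketch of what cv2.GaussianBlur builds internally (up to OpenCV&#x2019;s exact normalization).</p>

```python
import numpy as np

def gaussian_kernel(ksize=5, sigma=1.5):
    """Normalized ksize x ksize Gaussian kernel (kernel size and sigma
    match the values used in the text)."""
    ax = np.arange(ksize) - ksize // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)          # separable: outer product of two 1-D Gaussians
    return k / k.sum()          # normalize so the weights sum to one

k = gaussian_kernel()
print(k.shape)                  # (5, 5)
print(k[2, 2] == k.max())       # True: the center pixel gets the largest weight
```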
<list list-type="simple">
<list-item>
<label>4.</label>
<p>Edge identification and extraction: The Canny edge detection algorithm is used to identify the edges of the lane markings in the blurred image. This step uses the cv2.Canny() function. <xref ref-type="fig" rid="F6">Figure 6</xref> displays the final image with the identified edges. <xref ref-type="table" rid="T1">Table 1</xref> lists the operating parameters selected for the algorithm after numerous iterations of trial and error.</p>
</list-item>
</list>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p>The picture with Canny edge detection.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g006.tif"/>
</fig>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Canny Edge Algorithm Parameters.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Parameter for threshold</td>
<td valign="top" align="center">Value used</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Low</td>
<td valign="top" align="center">50</td>
</tr>
<tr>
<td valign="top" align="left">High</td>
<td valign="top" align="center">150</td>
</tr>
</tbody>
</table></table-wrap>
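<p>The role of the two thresholds in Table 1 can be illustrated with a simplified sketch of the double-thresholding stage; cv2.Canny additionally performs gradient computation, non-maximum suppression, and hysteresis edge tracking internally.</p>

```python
import numpy as np

LOW, HIGH = 50, 150  # threshold values from Table 1

def classify_edges(grad_mag):
    """Canny-style double thresholding: pixels at or above HIGH are strong
    edges; pixels between LOW and HIGH are weak and are kept by the full
    algorithm only when connected to a strong edge (hysteresis)."""
    strong = grad_mag >= HIGH
    weak = (grad_mag >= LOW) & ~strong
    return strong, weak

# A toy 2 x 3 gradient-magnitude image
mag = np.array([[10, 60, 200],
                [40, 120, 160]])
strong, weak = classify_edges(mag)
print(int(strong.sum()), int(weak.sum()))  # 2 2
```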
<list list-type="simple">
<list-item>
<label>5.</label>
<p>Region of interest: Select as the ROI the region of the image (<xref ref-type="fig" rid="F6">Figure 6</xref>) where the lane markers are expected. This region normally covers the portion of the road in front of the car and has the trapezoidal shape shown in <xref ref-type="fig" rid="F7">Figure 7</xref>. The fillPoly() method in OpenCV can be used to make a mask that specifies the ROI (<xref ref-type="table" rid="T2">Table 2</xref>).</p>
</list-item>
</list>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption><p>The pictures after creating lane lines using parts of Hough lines.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g007.tif"/>
</fig>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Region of Interest (ROI) vertices.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Vertices of trapezoidal shape for image</td>
<td valign="top" align="center">Value</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">trap_bottom_width</td>
<td valign="top" align="center">0.85</td>
</tr>
<tr>
<td valign="top" align="left">trap_top_width</td>
<td valign="top" align="center">0.07</td>
</tr>
<tr>
<td valign="top" align="left">trap_height</td>
<td valign="top" align="center">0.4</td>
</tr>
</tbody>
</table></table-wrap>
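<p>Using the fractions in Table 2, the trapezoid vertices can be computed as follows (a sketch; the helper name and the 960 &#x00D7; 540 frame size are illustrative). The resulting array has the shape cv2.fillPoly expects for building the mask.</p>

```python
import numpy as np

# Trapezoid fractions from Table 2
TRAP_BOTTOM_WIDTH, TRAP_TOP_WIDTH, TRAP_HEIGHT = 0.85, 0.07, 0.4

def roi_vertices(width, height):
    """Trapezoidal ROI vertices (bottom-left, top-left, top-right,
    bottom-right) derived from the frame size and the Table 2 fractions."""
    top_y = round(height * (1 - TRAP_HEIGHT))
    bl_x = round(width * (1 - TRAP_BOTTOM_WIDTH) / 2)
    tl_x = round(width * (1 - TRAP_TOP_WIDTH) / 2)
    return np.array([[(bl_x, height), (tl_x, top_y),
                      (width - tl_x, top_y), (width - bl_x, height)]],
                    dtype=np.int32)

verts = roi_vertices(960, 540)  # illustrative frame size
print(verts)
```

The mask itself is then produced with cv2.fillPoly(mask, verts, 255) and combined with the edge image via cv2.bitwise_and.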
<list list-type="simple">
<list-item>
<label>6.</label>
<p>To determine lane lines using the Hough transform, we apply the Hough algorithm to the edge image using the OpenCV function HoughLinesP(). This function accepts, among other parameters, the edge image, the parameter space resolution, and the line recognition threshold (<xref ref-type="table" rid="T3">Table 3</xref>). HoughLinesP() operates on the binary edge image and returns a set of line segments, each represented as a pair of endpoints, that correspond to the recognized lane markers. <xref ref-type="fig" rid="F8">Figures 8</xref>, <xref ref-type="fig" rid="F9">9</xref> show the identified line segments. The right and left lanes are always denoted by blue and red lines, respectively.</p>
</list-item>
</list>
<table-wrap position="float" id="T3">
<label>TABLE 3</label>
<caption><p>Hough transform parameters.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Parameter</td>
<td valign="top" align="center">Value</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">rho</td>
<td valign="top" align="center">2</td>
</tr>
<tr>
<td valign="top" align="left">theta (&#x03B8;)</td>
<td valign="top" align="center">1 &#x00D7; np.pi/180</td>
</tr>
<tr>
<td valign="top" align="left">threshold</td>
<td valign="top" align="center">15</td>
</tr>
<tr>
<td valign="top" align="left">min_line_length</td>
<td valign="top" align="center">10</td>
</tr>
<tr>
<td valign="top" align="left">max_line_gap</td>
<td valign="top" align="center">20</td>
</tr>
<tr>
<td valign="top" align="left">line_thickness</td>
<td valign="top" align="center">2</td>
</tr>
</tbody>
</table></table-wrap>
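<p>With the values in Table 3, the detection step can be wired up as follows (a sketch; the helper name is illustrative, while cv2.HoughLinesP and its minLineLength/maxLineGap keyword arguments are the standard OpenCV API).</p>

```python
import numpy as np

# Hough transform parameters from Table 3
RHO = 2                     # distance resolution of the accumulator, pixels
THETA = 1 * np.pi / 180     # angular resolution, radians
THRESHOLD = 15              # minimum accumulator votes to accept a line
MIN_LINE_LENGTH = 10        # shortest accepted segment, pixels
MAX_LINE_GAP = 20           # largest gap bridged within one segment, pixels

def detect_segments(edges):
    """Run the probabilistic Hough transform on a binary edge image.
    Returns an N x 1 x 4 array of endpoints [x1, y1, x2, y2], or None."""
    import cv2  # standard OpenCV binding
    return cv2.HoughLinesP(edges, RHO, THETA, THRESHOLD,
                           minLineLength=MIN_LINE_LENGTH,
                           maxLineGap=MAX_LINE_GAP)
```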
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption><p>The picture after Canny edge detection and region of interest.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g008.tif"/>
</fig>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption><p>The pictures after creating lane lines using parts of Hough lines.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g009.tif"/>
</fig>
<p>Compute the position of the vehicle in relation to the lane center and its curvature using the detected lane lines. This stage usually entails fitting a polynomial to the lane lines and determining the parameters of the polynomial. For this stage, you can utilize the polyfit() function.</p>
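<p>A minimal illustration of this fitting step with np.polyfit follows; the sample points are hypothetical, and fitting x as a function of y avoids infinite slopes for near-vertical lane lines.</p>

```python
import numpy as np

# Hypothetical pixel coordinates sampled along one detected lane line
ys = np.array([540, 480, 420, 360])
xs = np.array([120, 180, 240, 300])

# Fit x = f(y); a first-order fit yields the line's slope and intercept,
# and a higher order would capture lane curvature in the same way.
slope, intercept = np.polyfit(ys, xs, 1)
print(slope, intercept)  # close to -1.0 and 660.0 for these points
```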
<p>Using the line() and fillPoly() functions of OpenCV, draw the recognized lane lines and the lane area on the original input image. Use the imshow() function to display the final image with the identified lane markings superimposed over the initial input image.</p>
<p>Repeat steps 1&#x2013;6 for each frame of a video stream to perform real-time lane identification.</p>
</sec>
<sec id="S6">
<title>6. The implemented lane-line RTRD drawing function</title>
<p>As illustrated in <xref ref-type="fig" rid="F9">Figure 9</xref>, the technique referred to as &#x201C;create Lines()&#x201D; produces a single continuous line for each lane by connecting the left or right Hough segments corresponding to that lane line. The outcome resembles the continuous lines evident in <xref ref-type="fig" rid="F10">Figures 10</xref> and <xref ref-type="fig" rid="F11">11</xref>. The &#x201C;create Lines()&#x201D; approach accomplishes this through a sequence of steps, which encompass the following:</p>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption><p>The test image features a combination of solid yellow and dotted white lane lines, with cars present in the right lane.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g010.tif"/>
</fig>
<fig id="F11" position="float">
<label>FIGURE 11</label>
<caption><p>The picture contains solid yellow and dotted white lane-line segments in a left-turning lane, with no cars in the image.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g011.tif"/>
</fig>
<list list-type="simple">
<list-item>
<label>(1)</label>
<p>Marking (left or right): Each Hough line segment is assigned to the left or right lane line depending on its slope. A segment whose slope falls between 0.4 and 1.0 is classified as belonging to the left lane line, and a segment whose slope falls between &#x2212;1.0 and &#x2212;0.4 is classified as belonging to the right lane line.</p>
</list-item>
<list-item>
<label>(2)</label>
<p>The lengths, intercepts (points where they cross the x-axis), and related slopes of all categorized left and right line segments are computed and recorded.</p>
</list-item>
<list-item>
<label>(3)</label>
<p>Once the information from the preceding frames has been incorporated, the left and right lane boundaries can be formed; first, however, an Nth-order filter is applied to reduce jitter.</p>
</list-item>
<list-item>
<label>(4)</label>
<p>An FIR (finite impulse response) filter is a digital filter that computes each output sample as a weighted sum of a finite number of input samples. The mathematical equation for an FIR filter is as follows:</p>
</list-item>
</list>
<disp-formula id="S6.Ex11">
<mml:math id="M17">
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">y</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi mathvariant="normal">n</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo rspace="3.8pt">=</mml:mo>
<mml:mrow>
<mml:mi mathvariant="normal">h</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>&#x2062;</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo rspace="3.8pt" stretchy="false">)</mml:mo>
<mml:mo rspace="3.8pt">+</mml:mo>
<mml:mi mathvariant="normal">h</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>&#x2062;</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo rspace="3.8pt" stretchy="false">)</mml:mo>
<mml:mo rspace="3.8pt">+</mml:mo>
<mml:mi mathvariant="normal">h</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>&#x2062;</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo rspace="3.8pt" stretchy="false">)</mml:mo>
<mml:mo rspace="3.8pt">+</mml:mo>
<mml:mi mathvariant="normal">&#x22EF;</mml:mi>
<mml:mo rspace="3.8pt">+</mml:mo>
<mml:mi mathvariant="normal">h</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="normal">n</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>&#x2062;</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="normal">n</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where:</p>
<p>y(n) is the output sample at time n.</p>
<p>h(n) is the coefficient applied to the nth input sample, and x(n) is the input sample at time n.</p>
<p>The FIR filter&#x2019;s coefficients are determined by the design of the filter. The design of the filter can be optimized to achieve a desired response.</p>
<p>The resulting FIR filter coefficients and the corresponding parameters are presented in <xref ref-type="table" rid="T4">Table 4</xref>.</p>
<table-wrap position="float" id="T4">
<label>TABLE 4</label>
<caption><p>The resulting finite impulse response filter coefficients and the corresponding parameters.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Parameter</td>
<td valign="top" align="center">Parameter values</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Order</td>
<td valign="top" align="center">7</td>
</tr>
<tr>
<td valign="top" align="left">a<sub>0</sub></td>
<td valign="top" align="center">0.008</td>
</tr>
<tr>
<td valign="top" align="left">a<sub>1</sub></td>
<td valign="top" align="center">0.032</td>
</tr>
<tr>
<td valign="top" align="left">a<sub>2</sub></td>
<td valign="top" align="center">0.080</td>
</tr>
<tr>
<td valign="top" align="left">a<sub>3</sub></td>
<td valign="top" align="center">0.128</td>
</tr>
<tr>
<td valign="top" align="left">a<sub>4</sub></td>
<td valign="top" align="center">0.176</td>
</tr>
<tr>
<td valign="top" align="left">a<sub>5</sub></td>
<td valign="top" align="center">0.224</td>
</tr>
<tr>
<td valign="top" align="left">a<sub>6</sub></td>
<td valign="top" align="center">0.256</td>
</tr>
</tbody>
</table></table-wrap>
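<p>Applying the Table 4 coefficients to a history of per-frame estimates can be sketched as follows (an illustration only; note that the coefficients as listed sum to roughly 0.904, so the output is slightly attenuated unless renormalized).</p>

```python
# FIR coefficients from Table 4 (a0 ... a6)
COEFFS = [0.008, 0.032, 0.080, 0.128, 0.176, 0.224, 0.256]

def fir_smooth(history, coeffs=COEFFS):
    """Weighted sum of the last len(coeffs) per-frame estimates; the larger
    trailing coefficients weight the most recent frames more heavily."""
    window = history[-len(coeffs):]
    return sum(c * x for c, x in zip(coeffs, window))

# Hypothetical per-frame slope estimates for one lane boundary
slopes = [0.60, 0.62, 0.61, 0.65, 0.64, 0.66, 0.65]
print(fir_smooth(slopes))
```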
<list list-type="simple">
<list-item>
<label>(5)</label>
<p>In step 5 of the pipeline, the left and right lane boundaries determined from the computed slopes, intercepts, and ROI vertices are drawn in lavender.</p>
</list-item>
</list>
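<p>The slope-based categorization in step (1) above can be sketched as follows; the segment coordinates are hypothetical and the helper name is illustrative.</p>

```python
def categorize(segments, min_slope=0.4, max_slope=1.0):
    """Assign each Hough segment to the left or right lane line by slope,
    following the ranges given in the text; segments with slopes outside
    both ranges (and vertical segments) are discarded."""
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x1 == x2:
            continue                            # vertical: slope undefined
        slope = (y2 - y1) / (x2 - x1)
        if min_slope <= slope <= max_slope:
            left.append((x1, y1, x2, y2))
        elif -max_slope <= slope <= -min_slope:
            right.append((x1, y1, x2, y2))
    return left, right

# Hypothetical segment endpoints (x1, y1, x2, y2)
segs = [(860, 540, 560, 350), (100, 540, 400, 350), (0, 100, 300, 110)]
left, right = categorize(segs)
print(len(left), len(right))  # 1 1 -- the near-horizontal segment is dropped
```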
</sec>
<sec id="S7">
<title>7. Testing and validation</title>
<p>The proposed lane-line identification method is put to the test using a variety of pictures that reflect various scenarios. The results of these experiments, presented in <xref ref-type="fig" rid="F11">Figures 11</xref>&#x2013;<xref ref-type="fig" rid="F14">14</xref>, demonstrate the algorithm&#x2019;s effectiveness under diverse circumstances. To verify the pipeline&#x2019;s stability, the algorithm is also applied to many samples of streaming footage showing different types of driving circumstances. One circumstance stands out, however: spurious darker areas can mislead the system and cause lane-line recognition errors, as shown in <xref ref-type="fig" rid="F14">Figure 14</xref>. Although the proposed method has generally proven very resilient, this issue should be addressed in subsequent work to improve the algorithm&#x2019;s performance.</p>
<fig id="F12" position="float">
<label>FIGURE 12</label>
<caption><p>The picture displays lane-line segments that are solid yellow and dotted white.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g012.tif"/>
</fig>
<fig id="F13" position="float">
<label>FIGURE 13</label>
<caption><p>The test image showcases solid yellow and dotted white lane-line segments in a left lane without any cars.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g013.tif"/>
</fig>
<fig id="F14" position="float">
<label>FIGURE 14</label>
<caption><p>The presence of shadow patterns can result in the inaccurate detection of lane lines.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2022-27-g014.tif"/>
</fig>
<p>Testing also showed that the pipeline&#x2019;s execution speed is suitable for real-time use. A modest computational platform with a 2.8 GHz Intel Core i5 processor and 16 GB of RAM was used to examine three sample video streams. The resulting measurements are shown in <xref ref-type="table" rid="T5">Table 5</xref>:</p>
<table-wrap position="float" id="T5">
<label>TABLE 5</label>
<caption><p>The computation speed results for the Lane-Line RTRD algorithm.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Sample video name</td>
<td valign="top" align="center">Frames captured</td>
<td valign="top" align="center">Overall time in second</td>
<td valign="top" align="center">Frames per second</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Left line video</td>
<td valign="top" align="center">510</td>
<td valign="top" align="center">35</td>
<td valign="top" align="center">12.76</td>
</tr>
<tr>
<td valign="top" align="left">Right line video</td>
<td valign="top" align="center">202</td>
<td valign="top" align="center">5.0</td>
<td valign="top" align="center">21.2</td>
</tr>
<tr>
<td valign="top" align="left">Final video</td>
<td valign="top" align="center">240</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">10.3</td>
</tr>
</tbody>
</table></table-wrap>
<p>For accurately detecting lane lines, 11 frames per second is the minimum processing speed that has been measured, and this proves to be sufficient: roughly 10 frames per second is generally considered adequate for this application.</p>
</sec>
<sec id="S8">
<title>8. Suggested improvements</title>
<p>The improvements listed below are suggested:</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>When utilizing the line-fitting technique, employ the line-segment length as a criterion to separate strong and weak line segments.</p>
</list-item>
<list-item>
<label>2.</label>
<p>Conduct additional research on the design of the FIR filter, including studying higher orders and experimenting with other low-pass filter designs, such as Butterworth, Chebyshev, and elliptic filters.</p>
</list-item>
<list-item>
<label>3.</label>
<p>Extend the algorithm to determine the lane-line type (dashed or solid), which matters for compliance with traffic laws.</p>
</list-item>
</list>
</sec>
<sec id="S9" sec-type="conclusion">
<title>9. Conclusion</title>
<p>The method used to identify and monitor lane lines is described in this study. The suggested solution builds on well-known algorithms, such as Canny edge detection and the Hough transform, and is both quick and reliable. Additionally, it features an effective technique for locating and depicting lane lines. The suggested method only needs natural RGB images taken by a single CCD camera mounted behind the front windshield of the car. Utilizing a range of static images and live videos, the effectiveness of the proposed algorithm was thoroughly evaluated. The results showed that the suggested method can reliably and precisely identify lane boundaries, with the exception of scenarios with complex shadow patterns. The measured throughput on a low-cost CPU shows that lane-line RTRD is well suited for continuous lane recognition with little computational overhead. This qualifies it for inclusion in ADAS or self-driving vehicles.</p>
<p>The suggested method is thoroughly examined and analyzed, taking into account both its advantages and disadvantages. The effectiveness, performance, and dependability of the method are thoroughly assessed. The benefits of the method are emphasized, including its quick processing time, precise lane recognition, and low computing overhead. The technique&#x2019;s drawbacks and difficulties are acknowledged and explained as well. Among them could be challenges in dealing with complex situations like shadow patterns or specific environmental factors that could impair the accuracy of lane line identification.</p>
</sec>
<sec id="S10" sec-type="author-contributions">
<title>Author contributions</title>
<p>SH and BS contributed to the article and approved the submitted version.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1"><label>1.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Al Smadi</surname> <given-names>T</given-names></name></person-group>. <article-title>Real-time lane detection for driver assistance system.</article-title> <source><italic>Circ Syst.</italic></source> (<year>2014</year>) <volume>5</volume>:<fpage>201</fpage>&#x2013;<lpage>7</lpage>.</citation></ref>
<ref id="B2"><label>2.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kodeeswari</surname> <given-names>M</given-names></name> <name><surname>Daniel</surname> <given-names>P</given-names></name></person-group>. <article-title>Lane line detection in real time based on morphological operations for driver assistance system.</article-title> <source><italic>2017 4th International conference on signal processing, computing and control (ISPCC).</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2017</year>).</citation></ref>
<ref id="B3"><label>3.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Waykole</surname> <given-names>S</given-names></name> <name><surname>Shiwakoti</surname> <given-names>N</given-names></name> <name><surname>Stasinopoulos</surname> <given-names>P</given-names></name></person-group>. <article-title>Review of lane detection and tracking algorithms of advanced driver assistance system.</article-title> <source><italic>Sustainability.</italic></source> (<year>2021</year>) <volume>13</volume>:<issue>11417</issue>.</citation></ref>
<ref id="B4"><label>4.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mulyanto</surname> <given-names>A</given-names></name> <name><surname>Borman</surname> <given-names>RI</given-names></name> <name><surname>Prasetyawan</surname> <given-names>P</given-names></name> <name><surname>Jatmiko</surname> <given-names>W</given-names></name> <name><surname>Mursanto</surname> <given-names>P</given-names></name></person-group>. <article-title>Real-time human detection and tracking using two sequential frames for advanced driver assistance system.</article-title> <source><italic>2019 3rd International conference on informatics and computational sciences (ICICoS).</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2019</year>).</citation></ref>
<ref id="B5"><label>5.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wei</surname> <given-names>X</given-names></name> <name><surname>Zhang</surname> <given-names>Z</given-names></name> <name><surname>Chai</surname> <given-names>Z</given-names></name> <name><surname>Feng</surname> <given-names>W</given-names></name></person-group>. <article-title>Research on lane detection and tracking algorithm based on improved Hough transform.</article-title> <source><italic>Intelligent robotic and control engineering (IRCE) 2018 IEEE international conference.</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2018</year>).</citation></ref>
<ref id="B6"><label>6.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kumar</surname> <given-names>S</given-names></name> <name><surname>Jailia</surname> <given-names>M</given-names></name> <name><surname>Varshney</surname> <given-names>S</given-names></name></person-group>. <article-title>An efficient approach for highway lane detection based on the Hough transform and Kalman filter.</article-title> <source><italic>Innov Infrastruct Solut.</italic></source> (<year>2022</year>) <volume>7</volume>:<issue>290</issue>.</citation></ref>
<ref id="B7"><label>7.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gaikwad</surname> <given-names>V</given-names></name> <name><surname>Lokhande</surname> <given-names>S</given-names></name></person-group>. <article-title>Lane departure identification for advanced driver assistance.</article-title> <source><italic>IEEE Trans Intell Transport Syst.</italic></source> (<year>2014</year>) <volume>16</volume>:<fpage>910</fpage>&#x2013;<lpage>8</lpage>.</citation></ref>
<ref id="B8"><label>8.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fan</surname> <given-names>G</given-names></name> <name><surname>Bo</surname> <given-names>L</given-names></name> <name><surname>Qin</surname> <given-names>H</given-names></name> <name><surname>Rihua</surname> <given-names>J</given-names></name> <name><surname>Gang</surname> <given-names>Q</given-names></name></person-group>. <article-title>Robust lane detection and tracking based on machine vision.</article-title> <source><italic>ZTE Commun.</italic></source> (<year>2020</year>) <volume>18</volume>:<fpage>69</fpage>&#x2013;<lpage>77</lpage>.</citation></ref>
<ref id="B9"><label>9.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Manoharan</surname> <given-names>K</given-names></name> <name><surname>Daniel</surname> <given-names>P</given-names></name></person-group>. <article-title>Image processing-based framework for continuous lane recognition in mountainous roads for driver assistance system.</article-title> <source><italic>J Electron Imaging.</italic></source> (<year>2017</year>) <volume>26</volume>:<issue>063011</issue>.</citation></ref>
<ref id="B10"><label>10.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hechri</surname> <given-names>A</given-names></name> <name><surname>Hmida</surname> <given-names>R</given-names></name> <name><surname>Mtibaa</surname> <given-names>A</given-names></name></person-group>. <article-title>Robust road lanes and traffic signs recognition for driver assistance system.</article-title> <source><italic>Int J Comput Sci Eng.</italic></source> (<year>2015</year>) <volume>10</volume>:<fpage>202</fpage>&#x2013;<lpage>9</lpage>.</citation></ref>
<ref id="B11"><label>11.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>Z</given-names></name> <name><surname>Liu</surname> <given-names>C</given-names></name> <name><surname>Lian</surname> <given-names>C</given-names></name></person-group>. <article-title>PointLaneNet: efficient end-to-end CNNs for accurate real-time lane detection.</article-title> <source><italic>The IV IEEE symposium on intelligent vehicles.</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2019</year>).</citation></ref>
<ref id="B12"><label>12.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>P</given-names></name> <name><surname>Chen</surname> <given-names>H</given-names></name></person-group>. <article-title>Lane detection and tracking based on improved Hough transform and least-squares method.</article-title> <source><italic>International symposium on optoelectronic technology and application 2014: image processing and pattern recognition. Vol. 9301.</italic></source> <publisher-loc>Bellingham, WA</publisher-loc>: <publisher-name>SPIE</publisher-name> (<year>2014</year>).</citation></ref>
<ref id="B13"><label>13.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Y</given-names></name> <name><surname>Chen</surname> <given-names>L</given-names></name> <name><surname>Huang</surname> <given-names>H</given-names></name> <name><surname>Li</surname> <given-names>X</given-names></name> <name><surname>Xu</surname> <given-names>W</given-names></name> <name><surname>Zheng</surname> <given-names>L</given-names></name><etal/></person-group> <article-title>Nighttime lane markings recognition based on Canny detection and Hough transform.</article-title> <source><italic>2016 IEEE international conference on real-time computing and robotics (RCAR).</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2016</year>).</citation></ref>
<ref id="B14"><label>14.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barua</surname> <given-names>B</given-names></name> <name><surname>Biswas</surname> <given-names>S</given-names></name> <name><surname>Deb</surname> <given-names>K</given-names></name></person-group>. <article-title>An efficient method of lane detection and tracking for highway safety.</article-title> <source><italic>2019 1st international conference on advances in science, engineering and robotics technology (ICASERT).</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2019</year>).</citation></ref>
<ref id="B15"><label>15.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bisht</surname> <given-names>S</given-names></name> <name><surname>Sukumar</surname> <given-names>N</given-names></name> <name><surname>Sumathi</surname> <given-names>P</given-names></name></person-group>. <article-title>Integration of Hough transform and inter-frame clustering for road lane detection and tracking.</article-title> <source><italic>2022 IEEE international instrumentation and measurement technology conference (I2MTC).</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2022</year>).</citation></ref>
<ref id="B16"><label>16.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hechri</surname> <given-names>A</given-names></name> <name><surname>Mtibaa</surname> <given-names>A</given-names></name></person-group>. <article-title>Lanes and road signs recognition for driver assistance system.</article-title> <source><italic>Int J Comput Sci.</italic></source> (<year>2011</year>) <volume>8</volume>:<issue>402</issue>.</citation></ref>
<ref id="B17"><label>17.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Katru</surname> <given-names>A</given-names></name> <name><surname>Kumar</surname> <given-names>A</given-names></name></person-group>. <article-title>Improved parallel lane detection using modified additive Hough transform.</article-title> <source><italic>Int J Image Graphics Signal Process.</italic></source> (<year>2016</year>) <volume>8</volume>:<fpage>10</fpage>&#x2013;<lpage>7</lpage>.</citation></ref>
<ref id="B18"><label>18.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yi</surname> <given-names>SC</given-names></name> <name><surname>Chang</surname> <given-names>CH</given-names></name> <name><surname>Chen</surname> <given-names>YC</given-names></name></person-group>. <article-title>A lane detection approach based on intelligent vision.</article-title> <source><italic>Comput Electr Eng.</italic></source> (<year>2015</year>) <volume>42</volume>:<fpage>23</fpage>&#x2013;<lpage>9</lpage>.</citation></ref>
<ref id="B19"><label>19.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>TY</given-names></name> <name><surname>Chen</surname> <given-names>CH</given-names></name> <name><surname>Luo</surname> <given-names>GM</given-names></name> <name><surname>Hu</surname> <given-names>WC</given-names></name> <name><surname>Chern</surname> <given-names>J</given-names></name></person-group>. <article-title>Vehicle detection in nighttime environment by locating road lane and taillights.</article-title> <source><italic>2015 International conference on intelligent information hiding and multimedia signal processing (IIH-MSP).</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2015</year>).</citation></ref>
<ref id="B20"><label>20.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Machaiah</surname> <given-names>G</given-names></name> <name><surname>Pavithra</surname></name> <name><surname>Gagan</surname> <given-names>PC</given-names></name></person-group>. <article-title>A review article on lane-sensing and tracing algorithms for advanced driver assistance systems.</article-title> <source><italic>2022 7th international conference on communication and electronics systems (ICCES).</italic></source> <publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2022</year>).</citation></ref>
<ref id="B21"><label>21.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>Y</given-names></name> <name><surname>Zhang</surname> <given-names>Y</given-names></name> <name><surname>Liu</surname> <given-names>S</given-names></name> <name><surname>Liu</surname> <given-names>J</given-names></name> <name><surname>Zhao</surname> <given-names>Y</given-names></name></person-group>. <article-title>Robust and real-time lane marking detection for embedded system.</article-title> <source><italic>Image and graphics: 8th international conference, ICIG 2015, Tianjin, China, August 13&#x2013;16, 2015, proceedings, part III.</italic></source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name> (<year>2015</year>).</citation></ref>
<ref id="B22"><label>22.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kukkala</surname> <given-names>VK</given-names></name> <name><surname>Tunnell</surname> <given-names>J</given-names></name> <name><surname>Pasricha</surname> <given-names>S</given-names></name> <name><surname>Bradley</surname> <given-names>T</given-names></name></person-group>. <article-title>Advanced driver-assistance systems: a path toward autonomous vehicles.</article-title> <source><italic>IEEE Consum Electron Mag.</italic></source> (<year>2018</year>) <volume>7</volume>(<issue>5</issue>):<fpage>18</fpage>&#x2013;<lpage>25</lpage>.</citation></ref>
</ref-list>
</back>
</article>
