<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Bohr. Cs.</journal-id>
<journal-title>BOHR International Journal of Computer Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Bohr. Cs.</abbrev-journal-title>
<issn pub-type="epub">2583-455X</issn>
<publisher>
<publisher-name>BOHR</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.54646/bijcs.2022.03</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Original Research</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>An efficient hybrid by partitioning approach for extracting maximal gradual patterns in large databases (MPSGrite)</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Cabrel</surname> <given-names>Tabueu Fotso Laurent</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computer Engineering, UIT-FV, University of Dschang</institution>, <addr-line>Dschang</addr-line>, <country>Cameroon</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Mathematics and Computer Science, FS, University of Dschang</institution>, <addr-line>Dschang</addr-line>, <country>Cameroon</country></aff>
<author-notes>
<corresp id="c001">&#x002A;Correspondence: Tabueu Fotso Laurent Cabrel, <email>laurent.tabueu@gmail.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>02</month>
<year>2022</year>
</pub-date>
<volume>1</volume>
<issue>1</issue>
<fpage>11</fpage>
<lpage>25</lpage>
<history>
<date date-type="received">
<day>29</day>
<month>12</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>01</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Cabrel.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Cabrel</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Since automatic knowledge extraction must be performed in large databases, empirical studies already show an explosion of the search space for generalized patterns, and even more so for frequent gradual patterns. In addition, a very large number of relevant patterns are extracted. Faced with this problem, many approaches have been developed with the aim of reducing both the size of the search space and the time end users must wait before relevant patterns are detected. The objective is to let them make decisions or refine their analyses within a reasonable and realistic time frame. The gradual pattern mining algorithms commonly applied to large databases are CPU intensive. We propose a new approach that extracts maximal frequent gradual patterns based on a dataset-partitioning technique. This technique leads to a new, more efficient hybrid algorithm called MPSGrite. Experiments carried out on several well-known datasets justify the proposed approach.</p>
</abstract>
<kwd-group>
<kwd>pattern mining</kwd>
<kwd>pruning search space</kwd>
<kwd>maximal gradual support</kwd>
<kwd>lattice</kwd>
<kwd>adjacency matrix</kwd>
<kwd>partitioning</kwd>
</kwd-group>
<counts>
<fig-count count="19"/>
<table-count count="4"/>
<equation-count count="0"/>
<ref-count count="16"/>
<page-count count="15"/>
<word-count count="7598"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>Introduction</title>
<p>Data mining is part of a process known as knowledge extraction (KDE), which appeared in the scientific community in the 1990s. It is a fast-growing research field that aims to exploit the large quantities of data collected every day in various fields of computer science. This multidisciplinary field is at the crossroads of different domains, such as statistics, databases, big data, algorithms, and artificial intelligence. The type of data mining algorithm varies according to the type of data (binary, categorical, numerical, time series, spatial, etc.) of the dataset on which the algorithms will be applied, the type of relationship between the patterns searched for (sequence, co-variation, co-occurrence, etc.), and the level of complexity and semantics of the analyzed data (<xref ref-type="bibr" rid="B1">1</xref>). It is generally about finding co-occurrences or dependencies between attributes or items, unlike clustering, which is used to find relationships between objects or transactions. Gradual and maximal pattern mining, discussed in this article, is part of the search for frequent gradual dependencies between attributes of the dataset (<xref ref-type="bibr" rid="B2">2</xref>). Since automatic KDE has to be performed in large databases, empirical studies already show that for generalized patterns, association rules, and frequent gradual patterns, the search space to explore grows exponentially. To support better real-life decisions and to refine the analyses of domain experts in a reasonable time, many algorithms have been developed to reduce the search space and improve CPU and memory performance. Nevertheless, very few works offer end users a reduced number of relevant extracted patterns.
Thus, the technique of mining closed gradual patterns was developed (<xref ref-type="bibr" rid="B3">3</xref>). The goal is to extract a condensed representation of fuzzy gradual patterns based on the notion of closure of the Galois correspondence, which is then used as a generator of gradual rules and patterns. We can also cite classes of algorithms based on multicore architectures that reduce extraction time compared to their sequential versions, in particular the Paraminer algorithm of Negrevergne et al. (<xref ref-type="bibr" rid="B4">4</xref>) and the PGLCM algorithm of Alexandre Termier. However, very few works address the extraction of frequent and maximal gradual patterns. The specificity of this work is the use of a hybrid approach combining dataset partitioning with SGrite.</p>
<sec id="S1.SS1">
<title>Objectives</title>
<sec id="S1.SS1.SSS1">
<title>General objective: Extract frequent and maximal gradual patterns from large databases</title>
<p>Specific objective 1: Our algorithm relies, on the one hand, on halving the search space in its first step by using, as the search space, the lattice whose patterns have a positive first term. Two simultaneous traversals of this lattice are performed: an ascending one constructs the candidate sets from size 1 up to a size k, <italic>k</italic> &#x003C; n, where n is the total number of items, using the SGrite join, while a descending one manages the maximal and frequent gradual candidates. This objective is thus the exploitation of the lattice with a positive first term together with a two-way traversal, with a view to further reducing the support computations and hence the search space.</p>
</sec>
<sec id="S1.SS1.SSS2">
<title>Specific objective 2: Guarantee extraction in large databases</title>
<p>Observing the degree of memory saturation during gradual pattern discovery with the Grite, SGrite, and Graank methods has required preprocessing to reduce the dataset size (<xref ref-type="bibr" rid="B5">5</xref>) in the case of very correlated or dense data. This adaptation is necessary to carry out the extraction. However, such a search remains partial and can lead to the loss of quality patterns; indeed, certain co-variations between the attributes kept from the original dataset and the ignored remainder of the dataset are never evaluated. To partially solve this problem, we propose to search by partitioning the dataset, as described in section &#x201C;Presentation of the Hybrid Extraction Method for Maximal Gradual Patterns.&#x201D;</p>
</sec>
</sec>
</sec>
<sec id="S2">
<title>Literature review</title>
<sec id="S2.SS1">
<title>Definitions</title>
<p>Definition 1. Gradual item (<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B5">5</xref>&#x2013;<xref ref-type="bibr" rid="B8">8</xref>): It is an attribute A provided with a comparison operator &#x002A; &#x2208; {&#x2264;, &#x2265;, &#x003C;, &#x003E;} that reflects the direction of variation of the values of this attribute A. It is noted A<sup>&#x002A;</sup>. If &#x002A; is equal to &#x2265; (resp. &#x2264;), then A<sup>&#x002A;</sup> captures an increasing (resp. decreasing) variation of the values of A.</p>
<p>For example, A<sup>&#x2265;</sup>, A<sup>&#x2264;</sup>, and S<sup>&#x2265;</sup> are gradual items induced by <xref ref-type="table" rid="T1">Table 1</xref>. They are interpreted as &#x201C;the more the age increases,&#x201D; &#x201C;the more the age decreases,&#x201D; and &#x201C;the more the salary increases.&#x201D;</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Salary data set D.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">id</td>
<td valign="top" align="center">Age (A)</td>
<td valign="top" align="center">Salary (S)</td>
<td valign="top" align="center">Vehicle (V)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">o1</td>
<td valign="top" align="center">19</td>
<td valign="top" align="center">1199</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o2</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">1849</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o3</td>
<td valign="top" align="center">23</td>
<td valign="top" align="center">1199</td>
<td valign="top" align="center">2</td>
</tr>
<tr>
<td valign="top" align="left">o4</td>
<td valign="top" align="center">34</td>
<td valign="top" align="center">2199</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o5</td>
<td valign="top" align="center">29</td>
<td valign="top" align="center">1999</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o6</td>
<td valign="top" align="center">39</td>
<td valign="top" align="center">3399</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o7</td>
<td valign="top" align="center">51</td>
<td valign="top" align="center">3399</td>
<td valign="top" align="center">4</td>
</tr>
<tr>
<td valign="top" align="left">o8</td>
<td valign="top" align="center">40</td>
<td valign="top" align="center">4999</td>
<td valign="top" align="center">4</td>
</tr>
</tbody>
</table></table-wrap>
<p>Definition 2. Gradual itemset (<xref ref-type="bibr" rid="B5">5</xref>&#x2013;<xref ref-type="bibr" rid="B10">10</xref>): A gradual itemset, denoted by {(A<sub><italic>i</italic></sub>,&#x002A;i),<italic>i</italic> = 1&#x2026;k} or {A<sub><italic>i</italic></sub><sup>&#x002A;i</sup>, <italic>i</italic> = 1&#x2026;k}, is a set of gradual items that expresses a co-variation of the considered items. This set is interpreted semantically as a conjunction of gradual items.</p>
<p>For example, the gradual itemset A<sup>&#x003E;</sup>S<sup>&#x003C;</sup> deduced from <xref ref-type="table" rid="T1">Table 1</xref> means that &#x201C;the more the age, the less the salary.&#x201D;</p>
<p>Definition 3. Complementary gradual pattern (<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B5">5</xref>, <xref ref-type="bibr" rid="B9">9</xref>): If a gradual pattern <italic>M</italic> = {(A<sub><italic>i</italic></sub><sup>&#x002A;i</sup>), <italic>i</italic> = 1&#x2026;k}, then its complementary gradual pattern of the same size is denoted c(M). It is defined by c(M) = {(A<sub><italic>i</italic></sub><italic><sup>c(&#x002A;i)</sup></italic>), <italic>i</italic> = 1&#x2026;k}, where c(&#x002A;i) is the complement of the comparison operator &#x002A;i.</p>
<p>Note: In the previous works (<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B5">5</xref>), c(&#x2264;) = &#x2265;, c(&#x2265;) = &#x2264;, c(&#x003C;) = &#x003E;, c(&#x003E;) = &#x003C;.</p>
<p>In <xref ref-type="table" rid="T1">Table 1</xref>, for example, consider the two gradual patterns A<sup>&#x003E;</sup> and A<sup>&#x003E;</sup>S<sup>&#x003C;</sup>; their complements are the gradual patterns A<sup>&#x003C;</sup> and A<sup>&#x003C;</sup>S<sup>&#x003E;</sup>.</p>
<p>Definition 4. Inclusion of gradual patterns: The gradual pattern X is included in the gradual pattern Y, noted as X &#x2286; Y, if all the gradual items of X are also present in Y.</p>
<p>For example, from <xref ref-type="table" rid="T1">Table 1</xref>, the gradual pattern A<sup>&#x003E;</sup>S<sup>&#x003E;</sup> is included in the gradual patterns A<sup>&#x003E;</sup>S<sup>&#x003E;</sup>V<sup>&#x003E;</sup> and A<sup>&#x003E;</sup>S<sup>&#x003E;</sup>V<sup>&#x003C;</sup>.</p>
<p>From definitions 3 and 4, we can deduce the two properties allowing a significant pruning of the search space. The first property is the equality property of gradual support (<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B5">5</xref>, <xref ref-type="bibr" rid="B11">11</xref>) and the second is the anti-monotonicity property of gradual support (<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B5">5</xref>, <xref ref-type="bibr" rid="B10">10</xref>, <xref ref-type="bibr" rid="B11">11</xref>).</p>
<p>Definition 5. Lattice of gradual patterns (<xref ref-type="bibr" rid="B5">5</xref>). A lattice of gradual patterns is a lattice induced by the set of gradual patterns provided with the inclusion relation. The set of nodes of the lattice is the set of gradual patterns. An arc that goes from a gradual pattern A to a gradual pattern B reflects the inclusion of A in B.</p>
<p>Definition 6. Lattice of gradual patterns with first-term positive: It is a sublattice of the lattice of gradual patterns containing only the gradual patterns whose first gradual item is positive. They are noted in the form <inline-formula><mml:math id="INEQ1"><mml:msubsup><mml:mi>A</mml:mi><mml:mn>1</mml:mn><mml:mo>&#x2265;</mml:mo></mml:msubsup></mml:math></inline-formula>{A<sub><italic>i</italic></sub><sup>&#x002A;i</sup>}, <italic>i</italic> = 2&#x2026;k.</p>
<p>The lattice of gradual patterns is the search space of frequent gradual patterns; it is halved by restricting the search to the lattice with a positive first term. To illustrate this, see <xref ref-type="fig" rid="F1">Figures 1</xref>, <xref ref-type="fig" rid="F2">2</xref>.</p>
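<p>A small enumeration makes the halving concrete (illustrative only: the 26 signed patterns over the three items of <xref ref-type="table" rid="T1">Table 1</xref> reduce to 13 once the first gradual item is forced to be positive):</p>

```python
from itertools import combinations, product

# Enumerate every gradual pattern over {A, S, V}, then keep only those whose
# first gradual item is increasing: exactly half of the patterns remain,
# since each pattern is paired with its complement.
items = ["A", "S", "V"]

def all_patterns(items):
    for k in range(1, len(items) + 1):
        for attrs in combinations(items, k):
            for signs in product("><", repeat=k):
                yield tuple(zip(attrs, signs))

full = list(all_patterns(items))
positive_first = [p for p in full if p[0][1] == ">"]
print(len(full), len(positive_first))  # 26 13
```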
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Lattice of gradual patterns obtained with the items of <xref ref-type="table" rid="T1">Table 1</xref> (<xref ref-type="bibr" rid="B5">5</xref>, <xref ref-type="bibr" rid="B11">11</xref>).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g001.tif"/>
</fig>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Lattice of gradual patterns with first-term positive obtained from the items of <xref ref-type="table" rid="T1">Table 1</xref>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g002.tif"/>
</fig>
</sec>
<sec id="S2.SS2">
<title>Gradual pattern mining approaches</title>
<p>The linear regression technique described by H&#x00FC;llermeier (<xref ref-type="bibr" rid="B12">12</xref>) allows the extraction of gradual dependences whose support and confidence exceed the user-specified thresholds. This method only takes into account fuzzy data and rules whose premise and conclusion are each of size at most two. However, the T-norm idea, which is part of this technique, allows us to transcend this size restriction on the premise and the conclusion of the rules. The weight of a gradual pattern, also known as its gradual support (SG) in the method of Berzal et al. (<xref ref-type="bibr" rid="B13">13</xref>), is equal to the number of pairs of distinct objects that verify the order imposed by the pattern divided by the total number of pairs of distinct objects in the database. Thus, <inline-formula><mml:math id="INEQ2"><mml:mrow><mml:mrow><mml:mi>S</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>G</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>M</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo stretchy="false">|</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>o</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>o</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mi>D</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>D</mml:mi></mml:mrow><mml:mo stretchy="false">|</mml:mo><mml:mi>o</mml:mi><mml:msub><mml:mo>&#x227A;</mml:mo><mml:mi>M</mml:mi></mml:msub><mml:msup><mml:mi>o</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo><mml:mi>D</mml:mi><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo><mml:mi>D</mml:mi><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:mrow></mml:math></inline-formula>, where M is a gradual pattern and D is the database.</p>
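<p>This gradual support can be sketched in a few lines (an illustration under our own conventions: attributes are indexed by position, and &#x003E; items are read as strict increases, which is an assumption of this sketch):</p>

```python
from itertools import permutations

# Table 1, restricted to (Age, Salary). Sketch of the Berzal et al. gradual
# support: ordered pairs of distinct objects respecting the pattern, divided
# by the total number of ordered pairs |D|(|D|-1).
D = {"o1": (19, 1199), "o2": (27, 1849), "o3": (23, 1199), "o4": (34, 2199),
     "o5": (29, 1999), "o6": (39, 3399), "o7": (51, 3399), "o8": (40, 4999)}

def respects(o, op, M):
    # o precedes op under pattern M (list of (attribute index, direction))
    return all(o[i] < op[i] if s == ">" else o[i] > op[i] for i, s in M)

def gradual_support(M, D):
    n = len(D)
    count = sum(respects(D[a], D[b], M) for a, b in permutations(D, 2))
    return count / (n * (n - 1))

# A> S> : "the higher the age, the higher the salary"
print(gradual_support([(0, ">"), (1, ">")], D))  # 25/56 ≈ 0.446
```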
<p>Laurent et al. (<xref ref-type="bibr" rid="B10">10</xref>) expand on the technique of Berzal et al. (<xref ref-type="bibr" rid="B13">13</xref>), which utilizes the fact that if a pair of objects (o, o&#x2019;) validates the order established by a gradual pattern, the pair (o&#x2019;, o) does not.</p>
<p>In the so-called maximal path approach (<xref ref-type="bibr" rid="B9">9</xref>, <xref ref-type="bibr" rid="B11">11</xref>, <xref ref-type="bibr" rid="B14">14</xref>, <xref ref-type="bibr" rid="B15">15</xref>), the gradual support of a gradual pattern M is equal to the length of a maximal path associated with M divided by the total number of objects in the dataset. In this method, we have <inline-formula><mml:math id="INEQ3"><mml:mrow><mml:mrow><mml:mi>S</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>G</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>M</mml:mi><mml:mo rspace="4.2pt" stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mpadded width="+3.3pt"><mml:mfrac><mml:mrow><mml:msub><mml:mi>max</mml:mi><mml:mrow><mml:mpadded width="+1.7pt"><mml:mi>D</mml:mi></mml:mpadded><mml:mo rspace="5.8pt">&#x2208;</mml:mo><mml:mrow><mml:mpadded width="+1.7pt"><mml:mi>L</mml:mi></mml:mpadded><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>M</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:msub><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mi>D</mml:mi><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mi>D</mml:mi><mml:mo>|</mml:mo></mml:mrow></mml:mfrac></mml:mpadded></mml:mrow></mml:math></inline-formula>, where L(M) denotes the set of paths associated with M.</p>
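<p>The maximal path reading can likewise be sketched (our illustration: the precedence graph is explored by a memoized depth-first search to find the longest chain of objects that the pattern orders):</p>

```python
from functools import lru_cache

# Table 1, restricted to (Age, Salary). Sketch (ours) of the maximal-path
# gradual support: longest chain of objects ordered by the pattern, divided
# by the number of objects in the dataset.
rows = {"o1": (19, 1199), "o2": (27, 1849), "o3": (23, 1199), "o4": (34, 2199),
        "o5": (29, 1999), "o6": (39, 3399), "o7": (51, 3399), "o8": (40, 4999)}

def precedes(o, op, M):
    # o precedes op under pattern M (list of (attribute index, direction))
    return all(o[i] < op[i] if s == ">" else o[i] > op[i] for i, s in M)

def path_support(M, rows):
    names = list(rows)

    @lru_cache(maxsize=None)
    def longest_from(a):
        # number of objects on the longest chain starting at object a
        nexts = [longest_from(b) for b in names if precedes(rows[a], rows[b], M)]
        return 1 + max(nexts, default=0)

    return max(longest_from(a) for a in names) / len(rows)

# A> S> : a longest chain is o1, o2, o5, o4, o6, o8 — 6 of the 8 objects
print(path_support([(0, ">"), (1, ">")], rows))  # 0.75
```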
<p>Grite serves as the foundation for the SGrite (<xref ref-type="bibr" rid="B5">5</xref>, <xref ref-type="bibr" rid="B16">16</xref>) methodology. It prunes the search space by exploiting the anti-monotonicity of the support and the property of complementary patterns. The lattice with a positive first term reduces the search space by half. Another difference between SGrite and Grite is that SGrite needs only one sweep of the dependency graph to compute the gradual support, whereas Grite requires two sweeps. It employs two types of gradual support computation algorithms, each of which performs a single sweep of the precedence graph.</p>
</sec>
<sec id="S2.SS3">
<title>The SGrite algorithm</title>
<p>The two main activities in the SGrite algorithm are the generation of candidates and the computation of the support. As it is run for each candidate, the support computation is the most frequently invoked and most CPU-intensive procedure. The SGrite algorithm is built on the following notions. In the definitions that follow, O is the set of objects, and o and o&#x2019; are objects.</p>
<p>Definition 7. Adjacency matrix: The adjacency matrix of a gradual pattern M is a bitwise matrix that assigns the value 1 to every pair of objects (o, o&#x2019;) that satisfies the order of the pattern M, and 0 otherwise.</p>
<p>A pattern&#x2019;s adjacency matrix generates a dependence graph with nodes that are objects, and nonzero adjacency matrix inputs reflect the dependencies between pairs of nodes.</p>
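<p>A minimal sketch of such an adjacency matrix, built here for the single gradual item Age<sup>&#x003E;</sup> over <xref ref-type="table" rid="T1">Table 1</xref> (the dict-of-dicts layout is our choice, not SGrite&#x2019;s bitwise representation):</p>

```python
# Sketch (ours) of Definition 7: the adjacency matrix of the single gradual
# item Age> over Table 1, as a dict of dicts rather than a bitwise matrix.
ages = {"o1": 19, "o2": 27, "o3": 23, "o4": 34,
        "o5": 29, "o6": 39, "o7": 51, "o8": 40}

def adjacency(values):
    names = list(values)
    # Adj[o][o'] = 1 iff the pair (o, o') respects the increasing variation
    return {a: {b: int(a != b and values[a] < values[b]) for b in names}
            for a in names}

adj = adjacency(ages)
print(adj["o1"]["o2"], adj["o2"]["o1"])  # 1 0
```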
<p>Definition 8. Father node and son node: Given a pattern M&#x2019;s adjacency matrix Adj<sub><italic>M</italic></sub>, Adj<sub><italic>M</italic></sub>[o, o&#x2019;] = 1 means that o is the father of o&#x2019; and o&#x2019; is a son of o.</p>
<p>Definition 9. Isolated node: It is a node that is not linked to another node, i.e., it has neither a father nor a son. Considering a pattern M&#x2019;s adjacency matrix Adj<sub><italic>M</italic></sub>, the set of isolated nodes is defined by {o &#x2208; O | &#x2200;o&#x2032; &#x2208; O, Adj<sub><italic>M</italic></sub>[o, o&#x2032;] = 0 &#x2227; Adj<sub><italic>M</italic></sub>[o&#x2032;, o] = 0}.</p>
<p>Definition 10. Root: It is a node that has no parent but is connected to all the other nodes. For the pattern M&#x2019;s adjacency matrix Adj<sub><italic>M</italic></sub>, the root node-set is formally defined by {o &#x2208; O | &#x2200;o&#x2032; &#x2208; O, Adj<sub><italic>M</italic></sub>[o, o&#x2032;] = 1 &#x2227; Adj<sub><italic>M</italic></sub>[o&#x2032;, o] = 0}.</p>
<p>Definition 11. Leaf: It is a node that does not have a son but is not isolated. Given a pattern M&#x2019;s adjacency matrix Adj<sub><italic>M</italic></sub>, the leaf node-set is formally defined by {o &#x2208; O | &#x2200;o&#x2032; &#x2208; O, Adj<sub><italic>M</italic></sub>[o, o&#x2032;] = 0 &#x2227; &#x2203;o&#x2033; &#x2208; O | Adj<sub><italic>M</italic></sub>[o&#x2033;, o] = 1}.</p>
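<p>Definitions 9&#x2013;11 can be sketched over such an adjacency matrix (a simplified reading in which a root is any non-isolated node without a parent; the toy graph below is hypothetical):</p>

```python
# Sketch of the node classes of a precedence graph given Adj[o][o']:
# isolated nodes, roots (no parent), and leaves (no child, not isolated).
def node_classes(adj):
    nodes = list(adj)
    has_child = {o: any(adj[o][p] for p in nodes) for o in nodes}
    has_parent = {o: any(adj[p][o] for p in nodes) for o in nodes}
    isolated = {o for o in nodes if not has_child[o] and not has_parent[o]}
    roots = {o for o in nodes if has_child[o] and not has_parent[o]}
    leaves = {o for o in nodes if has_parent[o] and not has_child[o]}
    return isolated, roots, leaves

# Tiny precedence graph: a -> b -> c, with d isolated
adj = {"a": {"a": 0, "b": 1, "c": 0, "d": 0},
       "b": {"a": 0, "b": 0, "c": 1, "d": 0},
       "c": {"a": 0, "b": 0, "c": 0, "d": 0},
       "d": {"a": 0, "b": 0, "c": 0, "d": 0}}
print(node_classes(adj))  # ({'d'}, {'a'}, {'c'})
```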
<p>The SGrite algorithm accepts as input an adjacency matrix, an object node &#x2208; O, and a vector of size |O| whose indexes are the objects of O. Before running each algorithm, we assume that &#x2200;o &#x2208; O, Memory[o] = &#x2212;1. During the execution of each algorithm, Memory[o] holds the current maximum distance between o and any leaf; at the end of the execution, it holds the final maximum distance between o and any leaf. The first class of algorithms updates the values of a node&#x2019;s parents whenever the value of this node changes. The second class uses only the final value of a node to update the values of its parent nodes. We have four different variants of the SGrite method, namely, SGOpt, SG1, SGB1, and SGB2 (<xref ref-type="bibr" rid="B5">5</xref>).</p>
</sec>
</sec>
<sec id="S3">
<title>Methodology</title>
<p>To achieve our goals, several properties must be considered: (P1) the anti-monotonicity of the support, (P2) complementary gradual patterns, and (P3) the use of the frequency of sub-patterns of a frequent maximal gradual pattern for pruning. The role of P1 and P2 is already known and shown in SGrite for the upward traversal that generates frequent gradual patterns. P3 makes it possible, in the downward traversal, to ignore the frequent gradual sub-patterns of a maximal gradual pattern determined in advance. During the extraction process, in the downward traversal of the lattice with at least one positive term, we construct the maximal gradual candidates, which belong to that lattice, hence the guarantee that the properties of the optimal search space are preserved. In addition, the downward traversal further reduces the search space by chained filtering. Thus, we prune, on the one hand, with the set of infrequent gradual sub-patterns, which is applied to the maximal gradual candidates, and, on the other hand, with the sub-patterns of a frequent maximal gradual pattern discovered beforehand from a set of larger maximal gradual candidates; finally, the last pruning is done by computing the gradual support to check the frequency of the candidates and their relevance. The extraction process has two stop conditions: either the current set of candidates is exhausted during the upward traversal, or the set of maximal candidates is exhausted first, and the search is then terminated.</p>
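<p>As a rough illustration of this pruning idea, the following sketch (ours, not the authors&#x2019; exact MPSGrite procedure; the abstract support function, the item encoding, and the toy threshold are assumptions) grows candidates levelwise and skips any candidate already covered by a previously discovered maximal frequent pattern, so that its support is never recomputed:</p>

```python
# Hedged sketch of maximal-frequent-pattern search with subset pruning.
# The abstract `support` function stands in for the SGrite join and
# gradual-support computation described in the text.
def mine_maximal(items, support, minsup):
    maximal = []
    level = [frozenset([i]) for i in items if support(frozenset([i])) >= minsup]
    while level:
        nxt = set()
        for p in level:
            grew = False
            for i in items:
                if i in p:
                    continue
                q = p | {i}
                # Pruning (P3): q lies inside a known maximal frequent
                # pattern, so it is frequent but not maximal; skip its support.
                if any(q <= m for m in maximal):
                    grew = True
                    continue
                if support(q) >= minsup:
                    nxt.add(q)
                    grew = True
            if not grew:            # no frequent strict superset: p is maximal
                maximal.append(p)
        level = list(nxt)
    return maximal

# Toy run: every subset of {a, b, c} is frequent, and {d} is frequent alone.
toy = lambda s: 1 if s <= {"a", "b", "c"} or s == {"d"} else 0
print([sorted(m) for m in mine_maximal("abcd", toy, 1)])  # [['d'], ['a', 'b', 'c']]
```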
<sec id="S3.SS1">
<title>Hypotheses</title>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>Global reduction of the support computation time (the reduction brought by SGrite, the omission of the gradual support computation for frequent sub-patterns, and the reduction of the depths of the dependency graphs associated with a maximal candidate pattern), following the computation of the fusion of the n gradual items composing the candidate in question.</p>
</list-item>
<list-item>
<label>2.</label>
<p>Reduction of the search space.</p>
</list-item>
<list-item>
<label>3.</label>
<p>The reduction in the number of gradual knowledge produced, but with the possibility of listing them exhaustively.</p>
</list-item>
<list-item>
<label>4.</label>
<p>The choice of the size of the different partitions. To begin, we take 2 partitions: partition 1 is the largest partition extractable by SGrite, and partition 2 contains the remaining items of the considered dataset.</p>
</list-item>
</list>
</sec>
</sec>
<sec id="S4">
<title>Presentation of the hybrid extraction method for maximal gradual patterns</title>
<p>In this section, we will explain the general operating principles of the MPSGrite method. Section &#x201C;Partition Working Principle&#x201D; explains how the partitioning method works. Section &#x201C;Principle of Finding Maximum Gradual Patterns&#x201D; presents the operation of the algorithm for finding maximal frequent gradual patterns. The notations used in this part are summarized in <xref ref-type="table" rid="T4">Table 4</xref>. It is important to specify that what motivates this algorithm is its relevance. Indeed, the algorithm is oriented toward a concept different from that of &#x201C;SGrite&#x201D; and its extensions; the partitioning will have the particularity of being much more efficient on very large databases, like current OLTP<sup><xref ref-type="fn" rid="footnote1">1</xref></sup> systems, hence the importance of its study.</p>
<sec id="S4.SS1">
<title>Partition working principle</title>
<p>To simplify the description, we will limit ourselves to two partitions and consider a dataset D, which has n items. D is partitioned into two datasets D1 of size n1 and D2 of size n2, such that <italic>n</italic> = n1 + n2. D1 represents partition 1 of the database containing the first n<sub>1</sub> items, while D2 is the second partition containing the last n<sub>2</sub> items of D.</p>
<p>Definition 12. Partition of a database: It is a part of D that has the same number of transactions as D and takes a contiguous subset of the items of D, representing the items taken into account in that partition. Denoting by D = (O, I) the dataset with items <italic>I</italic> = {i<sub><italic>j</italic></sub>}, <italic>j</italic> = 1&#x2026;n, and n<sub><italic>k</italic></sub> &#x003C; n, if D<sub><italic>k</italic></sub> is the kth partition, starting at item number l (1 &#x2264; l &#x003C; n), then D<sub><italic>k</italic></sub> = (O, I<sub><italic>k</italic></sub>) with I<sub><italic>k</italic></sub> &#x2286; I and I<sub><italic>k</italic></sub> = {i<sub><italic>j</italic></sub>}<sub><italic>j</italic> = <italic>l</italic>&#x2026;<italic>l</italic> + <italic>n<sub>k</sub></italic>&#x2212;1</sub>.</p>
<p>Definition 13. Independence of two partitions of a database: Let D<sub>1</sub> = (O, I<sub>1</sub>) and D<sub>2</sub> = (O, I<sub>2</sub>) be two partitions of a dataset D = (O, I), with I<sub>1</sub> &#x2282; I and I<sub>2</sub> &#x2282; I. They are independent iff I<sub>1</sub> &#x2229; I<sub>2</sub> = &#x00D8;.</p>
<p>Example 14. Illustration of partition</p>
<p>Let I = {A, S, V} be the set of attributes of the salary dataset (see <xref ref-type="table" rid="T1">Table 1</xref>) where A is the age attribute, S is the salary, and V is the vehicle location number attribute.</p>
<p>For example, D1 = (O, {A, S}) and D2 = (O, {V}) are two independent partitions of the dataset of <xref ref-type="table" rid="T1">Table 1</xref>.</p>
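<p>Under these definitions, partitioning a dataset column-wise can be sketched as follows (the column-dict layout and the partition function are our own illustration):</p>

```python
# Table 1 stored column-wise; partitions are contiguous groups of columns
# sharing the same rows, so D1 = (O, {A, S}) and D2 = (O, {V}).
D = {"A": [19, 27, 23, 34, 29, 39, 51, 40],
     "S": [1199, 1849, 1199, 2199, 1999, 3399, 3399, 4999],
     "V": [3, 3, 2, 3, 3, 3, 4, 4]}

def partition(dataset, sizes):
    """Split the item set into contiguous partitions of the given sizes."""
    assert sum(sizes) == len(dataset)
    items = list(dataset)
    parts, start = [], 0
    for n_k in sizes:
        parts.append({i: dataset[i] for i in items[start:start + n_k]})
        start += n_k
    return parts

D1, D2 = partition(D, [2, 1])
print(sorted(D1), sorted(D2))      # ['A', 'S'] ['V']
print(set(D1) & set(D2) == set())  # True: the partitions are independent
```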
<p>Order defined on the sets.</p>
<p>Definition 15. Order on items: The order relation on the set of items of a dataset D, denoted by &#x003C;<sub><italic>I</italic></sub>, denotes the natural order relation of appearance of items in D.</p>
<p>For example, in the dataset of <xref ref-type="table" rid="T1">Table 1</xref>, we will have the order A &#x003C;<sub><italic>I</italic></sub> S &#x003C;<sub><italic>I</italic></sub> V between the items.</p>
<p>Definition 16. Ordered gradual itemset: An ordered gradual itemset is a gradual itemset whose gradual items respect the order defined by the position of their attributes in the set of items of the considered dataset D = (O, I). Denoting by <italic>M</italic> = {A<sub><italic>i</italic></sub><sup>&#x002A;i</sup>}<sub><italic>i</italic>&#x2208;{1,2,&#x2026;,k}</sub> a gradual k-pattern, it is ordered iff &#x2200;j, 1 &#x2264; j &#x003C; k, where j represents the index of appearance of the item in the pattern M, we have A<sub><italic>j</italic></sub> &#x003C;<sub><italic>I</italic></sub> A<sub><italic>j</italic>+1</sub>.</p>
<p>Definition 17. Gradual positive ordered itemset: A positive ordered gradual itemset is an ordered gradual itemset whose first term has an increasing variation.</p>
<p>For example, the gradual itemsets A<sup>&#x003E;</sup>S<sup>&#x003C;</sup>, A<sup>&#x003C;</sup>S<sup>&#x003C;</sup>, A<sup>&#x003E;</sup>V<sup>&#x003C;</sup>, and A<sup>&#x003C;</sup>S<sup>&#x003C;</sup>V<sup>&#x003C;</sup> are ordered gradual itemsets. In contrast, S<sup>&#x003C;</sup>A<sup>&#x003E;</sup> is not an ordered gradual itemset, even if it has the same meaning and the same gradual support as the ordered itemset A<sup>&#x003E;</sup>S<sup>&#x003C;</sup>.</p>
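<p>The order check of Definition 16 can be sketched directly (the rank table encodes the appearance order A &#x003C;<sub><italic>I</italic></sub> S &#x003C;<sub><italic>I</italic></sub> V; the encoding is ours):</p>

```python
# Sketch of Definition 16: a gradual itemset is ordered when its attributes
# appear in strictly increasing dataset order (here A < S < V, by rank).
RANK = {"A": 0, "S": 1, "V": 2}

def is_ordered(pattern):
    """pattern: list of (attribute, direction) pairs, e.g. [("A", ">"), ("S", "<")]."""
    ranks = [RANK[attr] for attr, _ in pattern]
    return all(a < b for a, b in zip(ranks, ranks[1:]))

print(is_ordered([("A", ">"), ("S", "<")]))  # True
print(is_ordered([("S", "<"), ("A", ">")]))  # False
```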
<p>Proposition 18. Total order on two gradual items: When comparing two gradual items pairwise, for example, A<sub>1</sub><sup>&#x002A;1</sup> and A<sub>2</sub><sup>&#x002A;2</sup>, if A<sub>1</sub> &#x003C;<sub><italic>I</italic></sub> A<sub>2</sub>, then A<sub>1</sub><sup>&#x002A;1</sup> &#x003C;<sub><italic>I</italic></sub><sup><italic>m</italic></sup> A<sub>2</sub><sup>&#x002A;2</sup>, and vice versa. If A<sub>1</sub> is equal to A<sub>2</sub>, the order is induced by the variation associated with each gradual item as follows: if &#x002A;1 = &#x002A;2, then A<sub>1</sub><sup>&#x002A;1</sup> &#x003C;<sub><italic>I</italic></sub><sup><italic>m</italic></sup> A<sub>2</sub><sup>&#x002A;2</sup>; if &#x002A;1 = &#x003C; and &#x002A;2 = &#x003E;, then A<sub>1</sub><sup>&#x002A;1</sup> &#x003C;<sub><italic>I</italic></sub><sup><italic>m</italic></sup> A<sub>2</sub><sup>&#x002A;2</sup>; and if &#x002A;1 = &#x003E; and &#x002A;2 = &#x003C;, then A<sub>2</sub><sup>&#x002A;2</sup> &#x003C;<sub><italic>I</italic></sub><sup><italic>m</italic></sup> A<sub>1</sub><sup>&#x002A;1</sup>.</p>
<p>For example, in <xref ref-type="table" rid="T1">Table 1</xref>, S <sup>&#x003C;</sup> <inline-formula><mml:math id="INEQ20"><mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mi>I</mml:mi><mml:mi>m</mml:mi></mml:msubsup></mml:math></inline-formula> S <sup>&#x003E;</sup>.</p>
<p>Definition 19. Order on two ordered gradual patterns: Let M<sub>1</sub> = { A1<sub><italic>i</italic></sub><sup>&#x002A;i</sup> }<sub><italic>i</italic> = 1&#x2026;k1</sub> and M<sub>2</sub> = { A2<sub><italic>i</italic></sub><sup>&#x002A;i</sup> }<sub><italic>i</italic> = 1&#x2026;k2</sub>, with k<sub>1</sub> = k<sub>2</sub>, be two ordered gradual itemsets (see Definition 16). They are ordered iff &#x2200;k = 1&#x2026;<italic>min</italic>(k<sub>1</sub>, k<sub>2</sub>), A1<sub><italic>k</italic></sub> &#x003C; <sub><italic>I</italic></sub> A2<sub><italic>k</italic></sub> or A1<sub><italic>k</italic></sub><sup>&#x002A;k</sup> &#x003C; <italic><sup>m</sup></italic> A2<sub><italic>k</italic></sub><sup>&#x002A;k</sup> (see Proposition 18). Denoting this order relation between ordered gradual patterns by &#x003C; <italic><sup>m</sup></italic><sub><italic>I</italic></sub>, we then have M<sub>1</sub> <inline-formula><mml:math id="INEQ21"><mml:mpadded width="+3.3pt"><mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mi>I</mml:mi><mml:mi>m</mml:mi></mml:msubsup></mml:mpadded></mml:math></inline-formula>M<sub>2</sub>.</p>
<p>For example, the gradual itemsets A <sup>&#x003E;</sup> S <sup>&#x003C;</sup> and A <sup>&#x003E;</sup> V <sup>&#x003C;</sup> satisfy A <sup>&#x003E;</sup> S <sup>&#x003C;</sup> <inline-formula><mml:math id="INEQ22"><mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mi>I</mml:mi><mml:mi>m</mml:mi></mml:msubsup></mml:math></inline-formula>A <sup>&#x003E;</sup> V <sup>&#x003C;</sup>, which means that the gradual pattern A <sup>&#x003E;</sup> S <sup>&#x003C;</sup> is less than A <sup>&#x003E;</sup>V <sup>&#x003C;</sup> under this order relation.</p>
<p>Definition 20. Set of ordered gradual patterns: It is a set of gradual patterns ordered by the relation <inline-formula><mml:math id="INEQ23"><mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mi>I</mml:mi><mml:mi>m</mml:mi></mml:msubsup></mml:math></inline-formula>. Denote by L<sub><italic>MgO</italic></sub> = {L<italic><sup>i</sup></italic><sub><italic>mgo</italic></sub>}<sub><italic>i</italic> = 1&#x2026;n</sub> the set of sets of ordered gradual patterns. &#x2200;L<italic><sup>i</sup></italic><sub><italic>mgo</italic></sub> &#x2208; L<sub><italic>MgO</italic></sub> such that L<italic><sup>i</sup></italic><sub><italic>mgo</italic></sub> = {m<sub><italic>j</italic></sub> }<sub><italic>j</italic> = 1&#x2026;k</sub>, we have &#x2200;m<sub><italic>j</italic></sub>, m<sub><italic>j</italic>+1</sub> &#x2208; L<italic><sup>i</sup></italic><sub><italic>mgo</italic></sub>, m<sub><italic>j</italic></sub> <inline-formula><mml:math id="INEQ24"><mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mi>I</mml:mi><mml:mi>m</mml:mi></mml:msubsup></mml:math></inline-formula> m<sub><italic>j</italic>+1</sub>, with <italic>j</italic> = 1&#x2026;k &#x2212;1. Each L<italic><sup>i</sup></italic><sub><italic>mgo</italic></sub> is a set of ordered gradual patterns.</p>
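<p>The total order of Proposition 18 and its lexicographic extension in Definition 19 can be sketched as a comparator. This is a minimal illustration, assuming gradual items are modeled as (attribute, variation) pairs, attributes are compared lexicographically, and the variation &#x201C;&#x003C;&#x201D; is ordered before &#x201C;&#x003E;&#x201D;; the function names are illustrative, not the paper's implementation.</p>

```python
# A minimal sketch of the total order on gradual items (Proposition 18)
# and its lexicographic extension to ordered gradual patterns
# (Definition 19). Names and the attribute ordering are illustrative
# assumptions, not the paper's implementation.

def item_key(item):
    """Map a gradual item (attribute, variation) to a sortable key.

    Attributes are compared first; for equal attributes the
    variation '<' precedes '>'.
    """
    attribute, variation = item
    return (attribute, 0 if variation == "<" else 1)

def pattern_less(m1, m2):
    """Lexicographic order <_I^m on two ordered gradual patterns."""
    return [item_key(i) for i in m1] < [item_key(i) for i in m2]

# Examples from the text: S^< <_I^m S^>, and A^> S^< <_I^m A^> V^<.
print(pattern_less([("S", "<")], [("S", ">")]))                          # True
print(pattern_less([("A", ">"), ("S", "<")], [("A", ">"), ("V", "<")]))  # True
```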
</sec>
<sec id="S4.SS2">
<title>Principle</title>
<p>The search by partitioning is based on the principle of SGrite, that is, on one of its optimized variants SGOpt or SG1 (<xref ref-type="bibr" rid="B5">5</xref>), applied to each of the independent partitions considered. Once all the sets of frequent patterns of each partition have been determined and organized by level according to pattern size, we first merge the gradual patterns of the same level across partitions. The next step is to generate the missing potential candidates by the method described below. This process of determining the frequent patterns of the whole database goes from frequent itemsets of size 1 up to the maximum possible size. The steps below always take place in the search space of the lattice with a first positive term. The approach can be summarized in four steps:</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>Step 1: determination of the frequent and infrequent gradual patterns of each partition;</p>
</list-item>
<list-item>
<label>2.</label>
<p>Merging, level by level, of the gradual itemsets of all the partitions, on the one hand, the frequent ones and, on the other hand, the infrequent ones;</p>
</list-item>
<list-item>
<label>3.</label>
<p>Iterative and pairwise generation of candidate patterns, based on the gradual patterns determined in step 2 as follows:</p>
<list list-type="simple">
<list-item>
<label>(a)</label>
<p>Choose two levels to start the generation of gradual candidates of size k + 1 from the frequent patterns of size k already known. The start level and next level are, respectively, 1 and 2;</p>
</list-item>
<list-item>
<label>(b)</label>
<p>For each frequent candidate c<sub><italic>k</italic></sub> = {A<sub><italic>i</italic></sub><sup>&#x002A;i</sup>}<sub><italic>i</italic> = 1&#x2026;k</sub> of the current ordered set of gradual patterns of size k, extract its prefix Prefix<sub><italic>k</italic>&#x2013;1</sub> = { A<sub><italic>i</italic></sub><sup>&#x002A;i</sup> }<sub><italic>i</italic> = 1&#x2026;k&#x2013;1</sub>.</p>
</list-item>
<list-item>
<label>(c)</label>
<p>From the obtained prefix, construct its adjacency matrix, and find the bounds in the sets of ordered frequent and infrequent gradual patterns of size k having Prefix<sub><italic>k</italic>&#x2013;1</sub> as prefix;</p>
</list-item>
<list-item>
<label>(d)</label>
<p>Then, retrieve the maximum attribute value, denoted max. It is the gradual item in position k, taken over the two sets of size k (frequent and infrequent), that has the greatest value;</p>
</list-item>
<list-item>
<label>(e)</label>
<p>Generate the suffixes used for the merge with Prefix<sub><italic>k</italic>&#x2013;1</sub>, denoted IGc = {(max + i) <sup>&#x003C;</sup>, (max + i) <sup>&#x003E;</sup>}<sub><italic>i</italic> = 1&#x2026;(| I| &#x2013;max+1)</sub>, a set of gradual items.</p>
</list-item>
<list-item>
<label>(f)</label>
<p>Determine the support of each new candidate c<sub><italic>new</italic></sub> = Prefix<sub><italic>k</italic>&#x2013;1</sub> &#x222A; e, &#x2200;e &#x2208; IGc. The frequent ones are added to the frequent gradual k-patterns of level k and the infrequent ones to the infrequent gradual k-patterns.</p>
</list-item>
<list-item>
<label>(g)</label>
<p>Repeat the process 3-(a) to 3-(f) until the set of frequent ordered gradual patterns of size k is exhausted.</p>
</list-item>
</list>
</list-item>
<list-item>
<label>4.</label>
<p>Repeat steps (2) and (3) until the current candidate set is empty.</p>
</list-item>
</list>
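<p>Steps 3-(e) and 3-(f) above can be sketched as follows. This is a minimal illustration, assuming items are identified by integer indices 1&#x2026;|I| and interpreting the suffix range as the items strictly after max; the function names are hypothetical, and the actual support computation on adjacency matrices is omitted.</p>

```python
# A simplified sketch of step 3-(e): given the index `max_idx` of the
# largest gradual item found at position k, generate the suffix items
# IGc = {(max+i)^<, (max+i)^>} used to extend Prefix_{k-1} into
# size-(k+1) candidates. Indices and names are illustrative assumptions.

def gen_suffix_items(max_idx, n_items):
    """Return the candidate suffix gradual items after index max_idx."""
    return [(i, v) for i in range(max_idx + 1, n_items + 1) for v in ("<", ">")]

def extend_prefix(prefix, max_idx, n_items):
    """Step 3-(f): each new candidate is Prefix_{k-1} union {e}, e in IGc."""
    return [prefix + [e] for e in gen_suffix_items(max_idx, n_items)]

# With 3 items (1=A, 2=S, 3=V), prefix [A^>] and max_idx = 2 (item S):
cands = extend_prefix([(1, ">")], 2, 3)
print(cands)  # two candidates: the prefix extended with V^< and with V^>
```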
</sec>
<sec id="S4.SS3">
<title>Illustration of partition</title>
<p>Example 21. Step 1: collection of frequent gradual patterns of each partition.</p>
<table-wrap position="float" id="T5">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr>
<td valign="top" align="left">frequent gradual patterns of partition 1, <xref ref-type="table" rid="T2">Table 2</xref></td>
<td valign="top" align="left">frequent gradual patterns of partition 2, <xref ref-type="table" rid="T3">Table 3</xref></td>
</tr>
<tr>
<td valign="top" align="center" colspan="2"><hr/></td>
</tr>
<tr>
<td valign="top" align="left"><bold>level 2</bold></td>
<td valign="top" align="left"><bold>level 2</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003E;</sup>S <sup>&#x003C;</sup> SG: 25.0%; A <sup>&#x003E;</sup>S <sup>&#x003E;</sup> SG: 75.0%</td>
<td valign="top" align="left"/></tr>
<tr>
<td valign="top" align="left"><bold>level 1</bold></td>
<td valign="top" align="left"><bold>level 1</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003C;</sup>SG: 100.0%; A <sup>&#x003E;</sup>SG: 100.0%;S <sup>&#x003C;</sup>SG:75.0%;S <sup>&#x003E;</sup>SG: 75.0%</td>
<td valign="top" align="left">V <sup>&#x003C;</sup> SG: 37.5%; V <sup>&#x003E;</sup> SG: 37.5%</td>
</tr>
</tbody>
</table></table-wrap>
<p>Example 22. Step 2: merge the gradual patterns from partitions 1 and 2 (see <xref ref-type="table" rid="T2">Tables 2</xref>, <xref ref-type="table" rid="T3">3</xref>).</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Partition D<sub>1</sub> of D.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">id</td>
<td valign="top" align="center">Age(A)</td>
<td valign="top" align="center">Salary(S)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">o1</td>
<td valign="top" align="center">19</td>
<td valign="top" align="center">1199</td>
</tr>
<tr>
<td valign="top" align="left">o2</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">1849</td>
</tr>
<tr>
<td valign="top" align="left">o3</td>
<td valign="top" align="center">23</td>
<td valign="top" align="center">1199</td>
</tr>
<tr>
<td valign="top" align="left">o4</td>
<td valign="top" align="center">34</td>
<td valign="top" align="center">2199</td>
</tr>
<tr>
<td valign="top" align="left">o5</td>
<td valign="top" align="center">29</td>
<td valign="top" align="center">1999</td>
</tr>
<tr>
<td valign="top" align="left">o6</td>
<td valign="top" align="center">39</td>
<td valign="top" align="center">3399</td>
</tr>
<tr>
<td valign="top" align="left">o7</td>
<td valign="top" align="center">51</td>
<td valign="top" align="center">3399</td>
</tr>
<tr>
<td valign="top" align="left">o8</td>
<td valign="top" align="center">40</td>
<td valign="top" align="center">4999</td>
</tr>
</tbody>
</table></table-wrap>
<table-wrap position="float" id="T3">
<label>TABLE 3</label>
<caption><p>Partition D<sub>2</sub> of D.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">id</td>
<td valign="top" align="center">Vehicle (V)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">o1</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o2</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o3</td>
<td valign="top" align="center">2</td>
</tr>
<tr>
<td valign="top" align="left">o4</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o5</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o6</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">o7</td>
<td valign="top" align="center">4</td>
</tr>
<tr>
<td valign="top" align="left">o8</td>
<td valign="top" align="center">4</td>
</tr>
</tbody>
</table></table-wrap>
<table-wrap position="float" id="T4">
<label>TABLE 4</label>
<caption><p>Notations used in the partition algorithm.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">n</td>
<td valign="top" align="left">Number of partitions in data set D.</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">(n<sub>1</sub>, n<sub>2</sub>, &#x2026;, n<sub>n</sub>)</td>
<td valign="top" align="left">Array of size n containing the number of items in each partition; n<sub><italic>k</italic></sub> is the number of items in the kth partition.</td>
</tr>
<tr>
<td valign="top" align="left">D<sub><italic>r</italic></sub></td>
<td valign="top" align="left">rth partition of the dataset.</td>
</tr>
<tr>
<td valign="top" align="left"><inline-formula><mml:math id="INEQ28"><mml:mrow><mml:msubsup><mml:mi>C</mml:mi><mml:mi>k</mml:mi><mml:mi>G</mml:mi></mml:msubsup></mml:mrow></mml:math></inline-formula></td>
<td valign="top" align="left">Set of global candidate gradual k-itemsets (potential frequent gradual itemsets).</td>
</tr>
<tr>
<td valign="top" align="left">mapC<italic><sup>G</sup></italic></td>
<td valign="top" align="left">Set of global candidate gradual itemsets (potential frequent itemsets)</td>
</tr>
<tr>
<td valign="top" align="left">mapF<sub><italic>r</italic></sub></td>
<td valign="top" align="left">Set of frequent gradual itemsets in partition D<sub><italic>r</italic></sub></td>
</tr>
<tr>
<td valign="top" align="left"><inline-formula><mml:math id="INEQ29"><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>p</mml:mi><mml:msubsup><mml:mi>F</mml:mi><mml:mi>k</mml:mi><mml:mi>r</mml:mi></mml:msubsup></mml:mrow></mml:math></inline-formula></td>
<td valign="top" align="left">Sets of gradual k-itemsets ordered (see Def 16) in the partition D<sub><italic>r</italic></sub></td>
</tr>
<tr>
<td valign="top" align="left">mapF<italic><sup>G</sup></italic></td>
<td valign="top" align="left">Set of global frequent itemsets (frequent itemsets).</td>
</tr>
<tr>
<td valign="top" align="left"><inline-formula><mml:math id="INEQ30"><mml:mrow><mml:mi>I</mml:mi><mml:msubsup><mml:mi>F</mml:mi><mml:mi>k</mml:mi><mml:mi>G</mml:mi></mml:msubsup></mml:mrow></mml:math></inline-formula></td>
<td valign="top" align="left">Set of global infrequent gradual k-itemsets, i.e. for all partitions.</td>
</tr>
<tr>
<td valign="top" align="left">mapIF<sub><italic>r</italic></sub></td>
<td valign="top" align="left">Set of infrequent gradual itemsets in the partition D<sub><italic>r</italic></sub>.</td>
</tr>
<tr>
<td valign="top" align="left"><inline-formula><mml:math id="INEQ31"><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>p</mml:mi><mml:mi>I</mml:mi><mml:msubsup><mml:mi>F</mml:mi><mml:mi>k</mml:mi><mml:mi>r</mml:mi></mml:msubsup></mml:mrow></mml:math></inline-formula></td>
<td valign="top" align="left">Sets of ordered infrequent gradual k-itemsets(see Def 16) in the partition D<sub><italic>r</italic></sub>.</td>
</tr>
<tr>
<td valign="top" align="left">mapIF<italic><sup>G</sup></italic></td>
<td valign="top" align="left">Set of global infrequent itemsets, i.e. for all partitions (infrequent itemsets)</td>
</tr>
</tbody>
</table></table-wrap>
<table-wrap position="float" id="T6">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr>
<td valign="top" align="left">frequent gradual patterns of initial fusion<hr/></td>
</tr>
<tr>
<td valign="top" align="left"><bold>level 2</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003E;</sup>S <sup>&#x003C;</sup> SG: 25.0%; A <sup>&#x003E;</sup>S <sup>&#x003E;</sup> SG: 75.0%</td>
</tr>
<tr>
<td valign="top" align="left"><bold>level 1</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003C;</sup>SG:100%;A <sup>&#x003E;</sup>SG:100%;S <sup>&#x003C;</sup>SG:75%;S <sup>&#x003E;</sup>SG: 75%; V <sup>&#x003C;</sup> SG: 37.5%; V <sup>&#x003E;</sup> SG: 37.5%</td>
</tr>
</tbody>
</table></table-wrap>
<p>Example 23. Final step: Set of frequent gradual patterns of all partitions.</p>
<table-wrap position="float" id="T7">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr>
<td valign="top" align="left">Frequent gradual patterns<hr/></td>
</tr>
<tr>
<td valign="top" align="left"><bold>level 3</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003E;</sup>S <sup>&#x003E;</sup>V <sup>&#x003E;</sup> SG: 37.5%</td>
</tr>
<tr>
<td valign="top" align="left"><bold>level 2</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003E;</sup>S <sup>&#x003C;</sup>SG:25.0%;A <sup>&#x003E;</sup>S <sup>&#x003E;</sup>SG:75.0%;A <sup>&#x003E;</sup>V <sup>&#x003C;</sup>SG: 25.0%; A <sup>&#x003E;</sup>V <sup>&#x003E;</sup> SG: 37.5%; S <sup>&#x003E;</sup>V <sup>&#x003E;</sup> SG: 37.5%</td>
</tr>
<tr>
<td valign="top" align="left"><bold>level 1</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003C;</sup> SG: 100%; A <sup>&#x003E;</sup> SG: 100%; S <sup>&#x003C;</sup> SG: 75%; S <sup>&#x003E;</sup> SG: 75%; V <sup>&#x003C;</sup> SG: 37.5%; V <sup>&#x003E;</sup> SG: 37.5%</td>
</tr>
</tbody>
</table></table-wrap>
<p>In this phase, we can see that among the gradual 2-patterns, the newly generated ones are those formed from items of the two partitions.</p>
<p>Example 24. Collection of frequent gradual patterns of the SGrite method.</p>
<table-wrap position="float" id="T8">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr>
<td valign="top" align="left">Frequent gradual patterns of SGrite<hr/></td>
</tr>
<tr>
<td valign="top" align="left"><bold>level 3</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003E;</sup>S <sup>&#x003E;</sup>V <sup>&#x003E;</sup> SG: 37.5%</td>
</tr>
<tr>
<td valign="top" align="left"><bold>level 2</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003E;</sup>S <sup>&#x003C;</sup>SG:25.0%;A <sup>&#x003E;</sup>S <sup>&#x003E;</sup>SG:75.0%;A <sup>&#x003E;</sup>V <sup>&#x003C;</sup>SG: 25.0%; A <sup>&#x003E;</sup>V <sup>&#x003E;</sup> SG: 37.5%; S <sup>&#x003E;</sup>V <sup>&#x003E;</sup> SG: 37.5%</td>
</tr>
<tr>
<td valign="top" align="left"><bold>level 1</bold></td>
</tr>
<tr>
<td valign="top" align="left">A <sup>&#x003C;</sup> SG: 100%; A <sup>&#x003E;</sup> SG: 100%; S <sup>&#x003C;</sup> SG: 75%; S <sup>&#x003E;</sup> SG: 75%; V <sup>&#x003C;</sup> SG: 37.5%; V <sup>&#x003E;</sup> SG: 37.5%</td>
</tr>
</tbody>
</table></table-wrap>
<p>In the abovementioned notations, the sets <italic>mapC<sup>G</sup></italic> and <italic>IF<sub>k</sub><sup>G</sup></italic> each have two fields, gradual pattern and gradual support (SG), for every element belonging to these sets. The sets <italic>mapF<sub>r</sub></italic>, <italic>mapF<sup>G</sup></italic>, <italic>mapC<sup>G</sup></italic>, <italic>mapIF<sub>r</sub></italic>, and <italic>mapIF<sup>G</sup></italic> are maps or association tables whose keys are the sizes of the gradual patterns of each level and whose values are the sets of ordered k-gradual patterns of the considered level k (i.e., of key k).</p>
<p>Remark: all sets of gradual itemsets contain the positive ordered gradual itemsets.</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>The Partition-Gen-Sgrite (D<sub><italic>r</italic></sub>, minSupport) algorithm applies the SGrite principle to D<sub><italic>r</italic></sub> and returns the set mapF<sub><italic>r</italic></sub> of local frequent gradual patterns, i.e., gradual patterns ordered according to Definition 20 that are frequent in the partition D<sub><italic>r</italic></sub>, arranged by pattern-size level. During the traversal of each level of the lattice with at least one positive term, it updates the infrequent gradual patterns of that level in the set <italic>mapIF<sub>k</sub><sup>r</sup></italic> of <italic>mapIF<sup>G</sup></italic>.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>The procedure <bold>genCandidateFreqUnionTwoPartition Consecutive</bold> (F<sub><italic>p</italic>1</sub>, F<sub><italic>p</italic>2</sub>, <italic>mapF<sup>G</sup></italic>, <italic>mapIF<sup>G</sup></italic>) generates and updates the set of global frequent and infrequent candidates obtained from the sets F<sub><italic>p1</italic></sub> and F<sub><italic>p2</italic></sub> of the two partitions being processed.</p>
</list-item>
</list>
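<p>The map layout described above can be illustrated as follows. This is a minimal sketch, assuming a plain dictionary keyed by level, with patterns and supports copied from the running example; the helper name is illustrative.</p>

```python
# An illustrative sketch of the map/association-table layout: keys are
# pattern sizes (levels) and values are the ordered lists of gradual
# k-patterns for that level, each paired with its gradual support.
# Contents mirror the level-1/level-2 tables of the running example.

mapF_G = {
    1: [("A<", 1.00), ("A>", 1.00), ("S<", 0.75), ("S>", 0.75)],
    2: [("A>S<", 0.25), ("A>S>", 0.75)],
}

def frequent_of_level(map_f, k):
    """Return the ordered gradual k-patterns stored under key k."""
    return map_f.get(k, [])

print(frequent_of_level(mapF_G, 2))  # [('A>S<', 0.25), ('A>S>', 0.75)]
```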
<p>In algorithm 2, the data structure <italic>resultatR</italic> has five fields: a field of type Map, which indicates the set, chosen between <italic>mapF<sup>G</sup></italic> and <italic>mapIF<sup>G</sup></italic>, in which the gradual item index value of the suffix of the maximal level-pattern prefix <italic>Prefix</italic><sub><italic>level</italic></sub> is found; min1 and max1, the index bounds of the super-patterns of <italic>Prefix</italic><sub><italic>level</italic></sub> in <italic>mapF<sup>G</sup><sub>level+1</sub></italic>; and min2 and max2, the index bounds of the super-patterns of Prefix<sub><italic>level</italic></sub> in <italic>mapIF<sup>G</sup><sub>level+1</sub></italic>. The structure <italic>resultatR</italic> is built by the function <bold>byPrefixFindPositionsMinMax</bold> of algorithm 2. The function <bold>matrixAdjacency</bold> determines the adjacency matrix of the gradual pattern taken as a parameter. The <bold>genCandidatOfALevel</bold> function generates candidate patterns of size level + 1, following the principle described in step 3-(e) of Section &#x201C;Principle.&#x201D; In this algorithm, on line 1, the <bold>productCartesian</bold> function generates the set of patterns resulting from the Cartesian product of the two sets taken as parameters; here, get(<italic>F<sub>k<sub>i</sub></sub><sup>p<sub>i</sub></sup></italic>) represents the ordered set of the gradual patterns of level k<sub><italic>i</italic></sub> of mapF<sub><italic>pi</italic></sub> (resp. the set of gradual patterns complementary to each pattern of level k<sub><italic>i</italic></sub> of mapIF<sub><italic>pi</italic></sub>).</p>
<p>The function filterSetByInfrequentSetAndSupportCompute (candidateFusion) proceeds as follows. (1) It first prunes from the set candidateFusion, given as a parameter, the candidates that are super-patterns of an infrequent pattern of mapIF<italic><sup>G</sup></italic>. If the candidate to prune is C<sub><italic>k</italic></sub> = {A<sub><italic>i</italic></sub>}<sub><italic>i</italic> = 1&#x2026;<italic>k</italic></sub>, then it generates two new candidates of size k &#x2212; 1, C1<sub><italic>k</italic>&#x2013;1</sub> = {<italic>A</italic><sub><italic>i</italic></sub>}<sub><italic>i</italic> = 1&#x2026;<italic>k</italic>&#x2013;1</sub> and C2<sub><italic>k</italic>&#x2013;1</sub> = {<italic>A</italic><sub><italic>i</italic></sub>}<sub><italic>i</italic> = 1&#x2026;<italic>k</italic>&#x2013;2</sub> &#x222A; {<italic>A</italic><sub><italic>k</italic></sub>}, which are added to the list of potential candidates. If, on the other hand, C<sub><italic>k</italic></sub> is frequent, then we add C<sub><italic>k</italic></sub> at level k of mapF<italic><sup>G</sup></italic> and its two frequent sub-patterns C1<sub><italic>k</italic>&#x2013;1</sub> and C2<sub><italic>k</italic>&#x2013;1</sub>, built exactly as above, at level k &#x2212; 1 of mapF<italic><sup>G</sup></italic>. Once candidateFusion = &#x2205;, process (1) is complete and we have a valid list of candidates. (2) Second, another filtering is performed by support computation. Here, for any candidate k-pattern C<sub><italic>k</italic></sub> = {<italic>A</italic><sub><italic>i</italic></sub>}<sub><italic>i</italic> = 1&#x2026;<italic>k</italic></sub> of <italic>list</italic>, C<sub><italic>k</italic></sub> &#x2208; <italic>list</italic>: if C<sub><italic>k</italic></sub> is frequent, then we add C<sub><italic>k</italic></sub> at level k of <italic>mapF<sup>G</sup></italic> and C1<sub><italic>k</italic>&#x2013;1</sub>, C2<sub><italic>k</italic>&#x2013;1</sub> at level k &#x2212; 1 of <italic>mapF<sup>G</sup></italic>; otherwise, we delete C<sub><italic>k</italic></sub> from <italic>list</italic> and add C1<sub><italic>k</italic>&#x2013;1</sub>, C2<sub><italic>k</italic>&#x2013;1</sub> at the end of <italic>list</italic> as new candidates. We repeat this process until <italic>list</italic> = &#x2205;.</p>
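<p>The candidate-splitting rule of the filtering process can be sketched as follows. This is a minimal illustration, assuming patterns are lists of gradual items and using a placeholder frequency test in place of the actual support computation on adjacency matrices; the function names are hypothetical.</p>

```python
# A sketch of the splitting rule used when a size-k candidate C_k is
# pruned (it is a super-pattern of an infrequent pattern, or its
# support is too low): two size-(k-1) candidates are produced,
# C1 = {A_1..A_{k-1}} and C2 = {A_1..A_{k-2}} + {A_k}, and re-enqueued.

def split_candidate(ck):
    """Return the two (k-1)-sub-patterns enqueued after pruning ck."""
    c1 = ck[:-1]            # drop the last gradual item
    c2 = ck[:-2] + ck[-1:]  # drop the next-to-last gradual item
    return c1, c2

def filter_candidates(candidates, is_frequent):
    """Keep frequent candidates; re-enqueue the two sub-patterns of
    every pruned candidate until the work list is empty."""
    frequent, queue = [], list(candidates)
    while queue:
        ck = queue.pop(0)
        if is_frequent(ck):
            frequent.append(ck)
        elif len(ck) > 1:
            queue.extend(split_candidate(ck))
    return frequent

c1, c2 = split_candidate([("A", ">"), ("S", ">"), ("V", ">")])
print(c1)  # [('A', '>'), ('S', '>')]
print(c2)  # [('A', '>'), ('V', '>')]
```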
<p>Note: in each of the algorithms below, before computing the support of a k-pattern, we check whether it already belongs to one of the sets <italic>mapF<sub>k</sub><sup>G</sup></italic> or <italic>mapIF<sub>k</sub><sup>G</sup></italic>; indeed, the k-pattern may already have been determined during the ascending scan of the search space of the lattice with a first positive term, or during the descending scan. The interest is to minimize the number of support computations, which are costly operations.</p>
</sec>
<sec id="S4.SS4">
<title>Principle of finding maximal gradual patterns</title>
<p>The method we use is based on SGrite, which is itself an optimization of Grite in terms of CPU time for extracting gradual patterns. The MPSGrite method that we develop in this article has two objectives. The first is to optimize the extraction time of the considered gradual patterns, and the second is to reduce the number of extracted gradual patterns. Indeed, in practice, domain experts report that the fewer the extracted patterns, the easier the interpretation and decision-making. We first opt for a dual traversal of the lattice with a first positive term, from levels 1 to n by SGrite and simultaneously from levels n to 1. The first problem of this combined approach is the generation of the candidates of the maximal set, which is of the order of 2<italic><sup>n</sup></italic><sup>&#x2013;1</sup>, where n is the number of gradual items of the database. Consequently, candidate generation has a higher CPU time cost. In addition, we also note that:</p>
<p>Lemma 25. The greater the number of initial maximal candidates, the more costly the determination of the subsequent maximal candidate sets, as well as the merging of the n adjacency matrices composing the considered n-candidate gradual patterns.</p>
<p>Thus, to keep an optimal method for extracting the abovementioned maximal gradual patterns, we opt for a hybridization method. The choice is based on the parameter n, the number of gradual items. Consider a dataset D of n items and t transactions, and a fixed value p that determines which method to use. Under these conditions, if n is less than or equal to p, then MPSGrite uses the two-way traversal method, that is, simultaneously ascending and descending the lattice with a first positive term. On the contrary, if n is strictly greater than p, then a bottom-up lattice traversal approach is used, which first efficiently generates the lattice of frequent gradual patterns and then, descending the lattice by &#x201C;backtracking,&#x201D; extracts the frequent maximal gradual patterns.</p>
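<p>The dispatch rule above can be sketched as follows. This is a minimal illustration with hypothetical strategy labels standing in for the two algorithmic components.</p>

```python
# A minimal sketch of the hybrid dispatch rule: given the number of
# gradual items n and a fixed threshold p, MPSGrite chooses between
# the two-way (simultaneous ascending/descending) traversal and the
# bottom-up traversal followed by backtracking. The string labels are
# placeholder assumptions for the two components.

def choose_strategy(n_items, p):
    """Select the lattice-traversal component as described in the text."""
    if n_items <= p:
        return "two-way"   # component 1: ascending + descending traversal
    return "bottom-up"     # component 2: ascend the lattice, then backtrack

print(choose_strategy(5, 8))   # two-way
print(choose_strategy(20, 8))  # bottom-up
```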
</sec>
<sec id="S4.SS5">
<title>Presentation of the components of the hybrid method</title>
<p>The search space of each of the components below is limited to the lattice with a first positive term. Two components of the hybrid method are required, namely, component 1 and component 2. Component 1 follows a simultaneous ascending and descending traversal of the positive lattice; its description is given in section &#x201C;Presentation of the Hybrid Extraction Method for Maximal Gradual Patterns.&#x201D;</p>
</sec>
<sec id="S4.SS6">
<title>Component 2: Ascending the positive lattice</title>
<p>This algorithmic component proceeds in two main steps. In step 1, the frequent gradual patterns are extracted by SGrite. Once this first step is completed, its result becomes the input of step 2.</p>
<p>In step 2, we generate the maximal gradual patterns from the frequent patterns of step 1. During the generation, we must respect the lexicographic order notion of Apriori, Grite, and SGrite. Let m (m &#x2264; n) be the size of the frequent gradual patterns of maximum cardinality.</p>
<table-wrap position="float" id="A1">
<label>Algorithm 1</label>
<caption><p>genCandidateFreqUnionTwoPartition Consecutive.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr>
<td valign="top" align="left"><monospace><bold>begin:</bold></monospace><break/> <monospace><bold>Require:</bold> <italic>F<sub><italic>p</italic>1</sub>; F<sub><italic>p</italic>2</sub>; mapF<italic><sup>G</sup></italic>; mapIF<italic><sup>G</sup></italic></italic>;</monospace><break/> <monospace><bold>Ensure:</bold> <italic>mapF<italic><sup>G</sup></italic>; mapIF<italic><sup>G</sup></italic></italic>;</monospace><break/> <monospace>{fusion of the frequent patterns of the highest level of the 2 partitions of level k<sub>1</sub> and k<sub>2</sub>}</monospace><break/> <monospace>1: <italic>candidateFusion</italic>&#x2190;<italic>productCartesian</italic></monospace><break/> <monospace><inline-formula><mml:math id="INEQ25"><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>g</mml:mi><mml:mi>e</mml:mi><mml:mi>t</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>F</mml:mi><mml:msubsup><mml:mi>p</mml:mi><mml:mn>1</mml:mn><mml:msub><mml:mi>k</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="INEQ26"><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>g</mml:mi><mml:mi>e</mml:mi><mml:mi>t</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>F</mml:mi><mml:msubsup><mml:mi>p</mml:mi><mml:mn>2</mml:mn><mml:msub><mml:mi>k</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> U <italic><inline-formula><mml:math id="INEQ27"><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>g</mml:mi><mml:mi>e</mml:mi><mml:mi>t</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>F</mml:mi><mml:msubsup><mml:mi>p</mml:mi><mml:mn>2</mml:mn><mml:msub><mml:mi>k</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>)</italic>;</monospace><break/> <monospace>{Filtering of candidates: pruning of over-patterns of infrequents, and of infrequents determined by support computation}</monospace><break/> <monospace>2: <bold>while</bold> <italic>candidateFusion</italic> &#x2260; &#x2205; <bold>do</bold></monospace><break/> <monospace>3: <italic>candidateFusion</italic> = filterSetByInfrequentSetAndSupportCompute(<italic>candidateFusion</italic>);</monospace><break/> <monospace>4: <bold>end while</bold></monospace><break/> <monospace>{initial reference level for the lattice path}</monospace><break/> <monospace>5: <italic>level</italic> = 1; <italic>nextLevel</italic> = 2; <italic>k</italic> = <italic>k</italic><sub>1</sub>+<italic>k</italic><sub>2</sub>;</monospace><break/> <monospace>6: <italic>refList</italic> &#x2190; <italic>mapF<sup>G</sup></italic></monospace><sub><italic>level</italic></sub><monospace>;</monospace><break/> <monospace>7: <bold>while</bold> <italic>level</italic> &#x2264; <italic>length</italic>(<italic>mapF<sup>G</sup></italic>)&#x2212;1 and <italic>nextLevel</italic> &#x2264; <italic>length</italic>(<italic>mapF<sup>G</sup></italic>) <bold>do</bold></monospace><break/> <monospace>8: <bold>for</bold> <italic>j</italic> = 1 <bold>to</bold> <italic>length(refList)</italic> <bold>do</bold></monospace><break/> <monospace>9: <italic>Prefix</italic></monospace><sub><italic>level</italic></sub> <monospace><italic>&#x2190; get(j, refList);</italic></monospace><break/> <monospace>10: <italic>resultatR</italic> = byPrefixFindPositionsMinMax(<italic>mapF<sup>G</sup></italic>, <italic>mapIF<sup>G</sup></italic>, <italic>Prefix</italic></monospace><sub><italic>level</italic></sub><monospace>, <italic>nextLevel</italic>);</monospace><break/> <monospace>11: <italic>adjPrefix</italic> = matrixAdjacency(<italic>Prefix</italic></monospace><sub><italic>level</italic></sub><monospace>);</monospace><break/> <monospace>12: genCandidatOfALevel(<italic>mapF<sup>G</sup></italic>, <italic>mapIF<sup>G</sup></italic>, <italic>resultatR</italic>, <italic>adjPrefix</italic>, <italic>Prefix</italic><sub><italic>level</italic></sub>, <italic>nextLevel</italic>);</monospace><break/> <monospace>13: <bold>end for</bold></monospace><break/> <monospace>14: <italic>level</italic> &#x2190; <italic>level</italic> + 1;</monospace><break/> <monospace>15: <italic>nextLevel</italic> &#x2190; <italic>nextLevel</italic> + 1;</monospace><break/> <monospace>16: <italic>refList</italic> &#x2190; <italic>mapF<sup>G</sup></italic></monospace><sub><italic>level</italic></sub><monospace>;</monospace><break/> <monospace>17: <bold>if</bold> not &#x2203; <italic>mapF<sup>G</sup></italic><sub><italic>nextLevel</italic></sub> <bold>then</bold></monospace><break/> <monospace>18: putToMap(<italic>mapF<sup>G</sup></italic>, nextLevel, &#x2205;);</monospace><break/> <monospace>19: <bold>end if</bold></monospace><break/> <monospace>20: <bold>end while</bold></monospace><break/> <monospace>21: <bold>return</bold> <italic>(mapF<italic><sup>G</sup></italic></italic>, <italic>mapIF<sup>G</sup></italic>);</monospace></td>
</tr>
</tbody>
</table>
</table-wrap>
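As a rough Python sketch (not the authors' implementation), the fusion step in the pseudocode above combines the top-level frequent patterns of the two partitions by Cartesian product, keeps the original patterns of each partition as candidates as well, and then prunes by support. Patterns are represented here as frozensets of items, and <monospace>is_frequent</monospace> is a hypothetical stand-in for the support computation of lines 2&#x2013;4:

```python
from itertools import product

def fuse_partitions(freq1, freq2, is_frequent):
    """Sketch of the fusion of two partitions' top-level frequent patterns.

    freq1, freq2: iterables of frozensets (frequent patterns per partition).
    is_frequent: hypothetical predicate standing in for the support filter.
    """
    # Cartesian combination of the two partitions' patterns...
    candidates = {a | b for a, b in product(freq1, freq2)}
    # ...kept alongside the original patterns themselves.
    candidates |= set(freq1) | set(freq2)
    # Pruning: keep only candidates frequent over the whole database.
    return {c for c in candidates if is_frequent(c)}
```

The actual algorithm additionally interleaves the support computation with pruning of super-patterns of already-known infrequent patterns, which this sketch folds into the single predicate.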
<p>In this case, the set of so-called maximal frequent patterns is initialized with all the frequent gradual m-patterns. Then, iteratively, level k&#x2212;1 is pruned of all the sub-patterns that contributed to the construction of the maximal gradual k-patterns determined and purified at iteration k. The process continues in this way until the current value of k reaches 1; when k = 1, the maximal gradual 1-patterns are determined. This completes the &#x201C;backtracking&#x201D; determination of the maximal gradual patterns.</p>
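The backtracking pass just described can be sketched as follows (a minimal illustration, not the paper's code), assuming frequent gradual patterns are grouped by level k in a dict of sets of frozensets. By anti-monotonicity of the gradual support, a pattern is maximal exactly when it survives pruning against the frequent patterns one level up:

```python
def maximal_patterns(frequent_by_level):
    """Extract maximal patterns by backtracking from the highest level.

    frequent_by_level: dict {k: set of frozensets}, each frozenset being a
    frequent gradual k-pattern (hypothetical representation).
    """
    maximal = set()
    pruned = {k: set(ps) for k, ps in frequent_by_level.items()}
    for k in sorted(pruned, reverse=True):
        # Patterns still present at level k have no frequent super-pattern
        # at level k+1, hence (by anti-monotonicity) none at all: maximal.
        maximal |= pruned[k]
        if k - 1 in pruned:
            # Prune from level k-1 every sub-pattern of a frequent k-pattern.
            pruned[k - 1] = {
                p for p in pruned[k - 1]
                if not any(p < q for q in frequent_by_level[k])
            }
    return maximal
```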
</sec>
</sec>
<sec id="S5">
<title>Experimentation</title>
<p>This section experimentally compares the performance of SGrite and the novel hybrid approach MPSGrite. We used three sets of data. The first two are test datasets, F20Att100Li, with 20 attributes and 100 transactions, and F20Att500Li, with 20 attributes and 500 transactions (<xref ref-type="bibr" rid="B5">5</xref>). The last is a meteorological dataset. For further experimentation, we added five other datasets: a test dataset, C250-A100-50, and four real ones: Life Expectancy-developed, Life Expectancy-developing, winequality-red, and fundamentals.</p>
<sec id="S5.SS1">
<title>Description of the datasets</title>
<p>This part presents the data used for the experiments carried out in this work.</p>
<p>We used a practical database of weather forecasts downloaded from the site <ext-link ext-link-type="uri" xlink:href="http://www.meteo-paris.com/ile-de-france/station-meteo-paris/pro/">http://www.meteo-paris.com/ile-de-france/station-meteo-paris/pro/</ext-link>. The dataset comprises 516 records collected over two days (July 22&#x2013;23, 2017), each described by 26 real-valued attributes, including temperature, cumulative rain (mm), humidity (percent), pressure (hPa), wind velocity (km/h), perceived wind temperature, and wind distance traveled (km) (<xref ref-type="bibr" rid="B5">5</xref>).</p>
<p>The C250-A100-50 dataset is taken from the site <ext-link ext-link-type="uri" xlink:href="https://github.com/bnegreve/paraminer/tree/master/data/gri">https://github.com/bnegreve/paraminer/tree/master/data/gri</ext-link>. Owing to memory constraints, we reduced the initial number of items from 100 to 12; otherwise, the extraction was not possible on our computer.</p>
<p>The winequality-red dataset is taken from the site <ext-link ext-link-type="uri" xlink:href="https://archive.ics.uci.edu/ml/datasets/wine+quality">https://archive.ics.uci.edu/ml/datasets/wine+quality</ext-link>. It is the Wine Quality dataset of red vinho verde wine samples from the north of Portugal. The goal is to model wine quality based on physicochemical tests. The dataset&#x2019;s attributes are: volatile acidity, citric acid, fixed acidity, residual sugar, free sulfur dioxide, total sulfur dioxide, density, pH, sulfates, alcohol, and quality (based on sensory data, with scores between 0 and 10).</p>
<p>The two datasets LifeExpectancydevelopped.csv and LifeExpectancydevelopping.csv (<xref ref-type="bibr" rid="B5">5</xref>) are real datasets taken from the site <ext-link ext-link-type="uri" xlink:href="https://www.kaggle.com/kumarajarshi/life-expectancy-">https://www.kaggle.com/kumarajarshi/life-expectancy-</ext-link>, which hosts open-access data. The data were collected from the World Health Organization (WHO) and the United Nations website with the help of Deeksha Russell and Duan Wang. For this life expectancy dataset, attributes 1 and 3 are removed and the rest are used (<xref ref-type="bibr" rid="B5">5</xref>). The dataset is designed to answer key questions such as: do the different predictors initially selected actually affect life expectancy? Should countries with low life expectancy (under 65) increase health spending to increase life expectancy? Is life expectancy related to diet, lifestyle, exercise, smoking, alcohol, etc.? Is there a positive or negative relationship between life expectancy and alcohol consumption? Do densely populated countries have a lower life expectancy? How does immunization coverage affect life expectancy? The final merged file (final dataset) consists of 22 columns and 2,938 rows, i.e., 20 predictors. All prognostic variables are immunization, mortality, economic, and social factors. Due to the size of the original dataset, we split the data into two groups: LifeExpectancydevelopped.csv for developed countries and LifeExpectancydevelopping.csv for developing countries, and we removed transactions with empty values.</p>
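The cleaning and split described above can be sketched with the standard csv module (an illustrative sketch only; the <monospace>Status</monospace> column name and its values are assumptions about the Kaggle file, not details given in the paper):

```python
import csv
import io

def clean_and_split(csv_text, status_col="Status"):
    """Drop rows containing empty values, then group the remaining rows by
    development status (assumed column name), mirroring the preprocessing
    that produced the two life-expectancy files."""
    groups = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if any(v is None or v.strip() == "" for v in row.values()):
            continue  # remove transactions with empty values
        groups.setdefault(row[status_col], []).append(row)
    return groups
```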
<p>The fundamentals.csv dataset contains metrics extracted from annual SEC 10-K filings (2012&#x2013;2016), which should be enough to derive most of the popular fundamental indicators. The file comes from Nasdaq Financials. This dataset initially has 77 attributes. After preprocessing, which consisted of removing empty-valued transactions, we derived a dataset with 1,299 transactions and 74 attributes. The removed attributes are the first four: stock symbol, end of period, accounts payable, and accounts receivable; for more information, see <ext-link ext-link-type="uri" xlink:href="https://www.kaggle.com/dgawlik/nyse?select=fundamentals.csv">https://www.kaggle.com/dgawlik/nyse?select=fundamentals.csv</ext-link>. Owing to the limited memory of our computer, we extracted part of the fundamentals.csv dataset for the experiments, which gave us a dataset of 300 transactions and 35 attributes. Transactions are the top 300 and attributes are the top 35 (<xref ref-type="bibr" rid="B5">5</xref>).</p>
</sec>
<sec id="S5.SS2">
<title>Evaluation of algorithms</title>
<p>All tests on the datasets presented in the preceding part were performed on an Intel Core&#x2122; i7-2630QM CPU at 2.00 GHz &#x00D7; 8 with 8 GB of main memory, running Ubuntu 16.04 LTS. We investigated a number of support thresholds for each dataset and measured the associated execution times (shown in <xref ref-type="fig" rid="F3">Figures 3</xref>, <xref ref-type="fig" rid="F5">5</xref>, <xref ref-type="fig" rid="F8">8</xref>, <xref ref-type="fig" rid="F10">10</xref>, <xref ref-type="fig" rid="F12">12</xref>, <xref ref-type="fig" rid="F14">14</xref>, <xref ref-type="fig" rid="F16">16</xref>, <xref ref-type="fig" rid="F18">18</xref>), as well as the number of retrieved patterns (shown in <xref ref-type="fig" rid="F4">Figures 4</xref>, <xref ref-type="fig" rid="F6">6</xref>, <xref ref-type="fig" rid="F9">9</xref>, <xref ref-type="fig" rid="F11">11</xref>, <xref ref-type="fig" rid="F13">13</xref>, <xref ref-type="fig" rid="F15">15</xref>, <xref ref-type="fig" rid="F17">17</xref>, <xref ref-type="fig" rid="F19">19</xref>). In these figures, (N It. X M Tr.) represents the number of items (N) and transactions (M) in the dataset.</p>
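The measurement protocol (execution time and number of extracted patterns per support threshold) can be sketched as below; <monospace>extract</monospace> is a hypothetical placeholder for the SGrite or MPSGrite extraction routine, which is not reproduced here:

```python
import time

def benchmark(extract, dataset, supports):
    """Time an extraction function for several minimum-support thresholds.

    Returns {support: (elapsed_seconds, number_of_patterns)}; 'extract' is
    a hypothetical callable taking (dataset, min_support).
    """
    results = {}
    for s in supports:
        t0 = time.perf_counter()
        patterns = extract(dataset, s)
        results[s] = (time.perf_counter() - t0, len(patterns))
    return results
```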
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>Different CPU times [Tr. (resp. It.) denotes transactions (resp. items)] for dataset C250-A100-50, 251 Tr. and 12 It.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g003.tif"/>
</fig>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>Experimentation on dataset C250-A100-50 for the number of gradual patterns extracted. <bold>(A)</bold> Experimentation 1, data set C250-A100-50. <bold>(B)</bold> Experimentation 2, data set C250-A100-50.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g004.tif"/>
</fig>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>Different CPU times [Tr. (resp. It.) denotes transactions (resp. items)] for life expectancy. <bold>(A)</bold> Data set life expectancy developed, 245 Tr. and 20 It. <bold>(B)</bold> Data set life expectancy developing, 1407 Tr. and 20 It.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g005.tif"/>
</fig>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p>Exp. 1 on the life expectancy data sets. <bold>(A)</bold> Exp. 1, data set life expectancy developed. <bold>(B)</bold> Exp. 1, data set life expectancy developing.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g006.tif"/>
</fig>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption><p>Exp. 2 on the life expectancy data sets. <bold>(A)</bold> Exp. 2, data set life expectancy developed. <bold>(B)</bold> Exp. 2, data set life expectancy developing.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g007.tif"/>
</fig>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption><p>Different CPU times [Tr. (resp. It.) denotes transactions (resp. items)] for F20Att500Li, 500 Tr. and 20 It.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g008.tif"/>
</fig>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption><p>Experimentation of data set F20Att500Li on the number of gradual patterns extracted. <bold>(A)</bold> Exp. 1, data set F20Att500Li. <bold>(B)</bold> Exp. 2, data set F20Att500Li.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g009.tif"/>
</fig>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption><p>Different CPU times [Tr. (resp. It.) denotes transactions (resp. items)] for F20Att200Li, 200 Tr. and 20 It.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g010.tif"/>
</fig>
<fig id="F11" position="float">
<label>FIGURE 11</label>
<caption><p>Experimentation of data set F20Att200Li on the number of gradual patterns extracted. <bold>(A)</bold> Exp. 1, data set F20Att200Li. <bold>(B)</bold> Exp. 2, data set F20Att200Li.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g011.tif"/>
</fig>
<fig id="F12" position="float">
<label>FIGURE 12</label>
<caption><p>Different CPU times [Tr. (resp. It.) denotes transactions (resp. items)] for F30Att100Li, 100 Tr. and 30 It.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g012.tif"/>
</fig>
<fig id="F13" position="float">
<label>FIGURE 13</label>
<caption><p>Experimentation of data set F30Att100Li on the number of gradual patterns extracted. <bold>(A)</bold> Exp. 1, data set F30Att100Li. <bold>(B)</bold> Exp. 2, data set F30Att100Li.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g013.tif"/>
</fig>
<fig id="F14" position="float">
<label>FIGURE 14</label>
<caption><p>Different CPU times [Tr. (resp. It.) denotes transactions (resp. items)] for data set fundamental, 300 Tr. and 35 It.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g014.tif"/>
</fig>
<fig id="F15" position="float">
<label>FIGURE 15</label>
<caption><p>Experimentation of data set fundamental on the number of gradual patterns extracted. <bold>(A)</bold> Exp. 1, data set fundamental. <bold>(B)</bold> Exp. 2, data set fundamental.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g015.tif"/>
</fig>
<fig id="F16" position="float">
<label>FIGURE 16</label>
<caption><p>Comparison of execution times on meteorological data made up of 516 transactions and 26 items.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g016.tif"/>
</fig>
<fig id="F17" position="float">
<label>FIGURE 17</label>
<caption><p>Experimentation of the meteorological data set on the number of gradual patterns extracted. <bold>(A)</bold> Exp. 1, meteorological data set. <bold>(B)</bold> Exp. 2, meteorological data set.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g017.tif"/>
</fig>
<fig id="F18" position="float">
<label>FIGURE 18</label>
<caption><p>Comparison of execution times on test data made up of 100 Tr. and 10 It.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g018.tif"/>
</fig>
<fig id="F19" position="float">
<label>FIGURE 19</label>
<caption><p>Experimentation of the test data set on the number of gradual patterns extracted. <bold>(A)</bold> Exp. 1, test data set. <bold>(B)</bold> Exp. 2, test data set.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijcs-2022-03-g019.tif"/>
</fig>
</sec>
</sec>
<sec id="S6" sec-type="conclusion">
<title>Conclusion</title>
<p>In this research, we describe a method for improving the efficiency of algorithms for extracting frequent and maximal gradual patterns by halving both the search space and the burden of computing gradual supports on big datasets. Experiments on many types of well-known datasets indicate the efficacy of the proposed technique. In future work, we will analyze larger datasets and investigate the possibility of distributed processing.</p>
</sec>
</body>
<back>
<fn-group>
<fn id="footnote1">
<label>1</label>
<p>OnLine Transactional Processing.</p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1"><label>1.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oudni</surname> <given-names>A.</given-names></name></person-group> <source><italic>Fouille de donnees par extraction de motifs graduels: contextualisation et enrichissement.</italic></source> <comment>Ph.D. thesis</comment>. <publisher-loc>Paris</publisher-loc>: <publisher-name>Universite Pierre et Marie Curie</publisher-name> (<year>2014</year>).</citation></ref>
<ref id="B2"><label>2.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aggarwal</surname> <given-names>CC.</given-names></name></person-group> <source><italic>Data mining: the textbook.</italic></source> <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2015</year>). <pub-id pub-id-type="doi">10.1007/978-3-319-14142-8</pub-id></citation></ref>
<ref id="B3"><label>3.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ayouni</surname> <given-names>S.</given-names></name></person-group> <source><italic>Etude et extraction de regles graduelles floues: definition d&#x2019;algorithmes efficaces.</italic></source> <comment>Ph.D. thesis</comment>. <publisher-loc>Montpellier</publisher-loc>: <publisher-name>Universite Montpellier</publisher-name> (<year>2012</year>).</citation></ref>
<ref id="B4"><label>4.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Negrevergne</surname> <given-names>B</given-names></name> <name><surname>Termier</surname> <given-names>A</given-names></name> <name><surname>Rousset</surname> <given-names>M</given-names></name> <name><surname>Mehaut</surname> <given-names>J</given-names></name></person-group>. <article-title>Para miner: a generic pattern mining algorithm for multi-core architectures.</article-title> <source><italic>Data Min Knowl Discov.</italic></source> (<year>2014</year>) <volume>28</volume>:<fpage>593</fpage>&#x2013;<lpage>633</lpage>. <pub-id pub-id-type="doi">10.1007/s10618-013-0313-2</pub-id></citation></ref>
<ref id="B5"><label>5.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clementin</surname> <given-names>TD</given-names></name> <name><surname>Cabrel</surname> <given-names>TFL</given-names></name> <name><surname>Belise</surname> <given-names>KE</given-names></name></person-group>. <article-title>A novel algorithm for extracting frequent gradual patterns.</article-title> <source><italic>Mach Learn Appl.</italic></source> (<year>2021</year>) <volume>5</volume>:<issue>100068</issue>. <pub-id pub-id-type="doi">10.1016/j.mlwa.2021.100068</pub-id></citation></ref>
<ref id="B6"><label>6.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ngo</surname> <given-names>T</given-names></name> <name><surname>Georgescu</surname> <given-names>V</given-names></name> <name><surname>Laurent</surname> <given-names>A</given-names></name> <name><surname>Libourel</surname> <given-names>T</given-names></name> <name><surname>Mercier</surname> <given-names>G</given-names></name></person-group>. <article-title>Mining spatial gradual patterns: Application to measurement of potentially avoidable hospitalizations.</article-title> In: <person-group person-group-type="editor"><name><surname>Tjoa</surname> <given-names>AM</given-names></name> <name><surname>Bellatreche</surname> <given-names>L</given-names></name> <name><surname>Biffl</surname> <given-names>S</given-names></name> <name><surname>van Leeuwen</surname> <given-names>J</given-names></name> <name><surname>Wiedermann</surname> <given-names>J</given-names></name></person-group> <role>editors</role>. <source><italic>SOFSEM 2018: Theory and practice of computer science, volume 10706.</italic></source> <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name> (<year>2018</year>). p. <fpage>596</fpage>&#x2013;<lpage>608</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-73117-9_42</pub-id></citation></ref>
<ref id="B7"><label>7.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Owuor</surname> <given-names>D</given-names></name> <name><surname>Laurent</surname> <given-names>A</given-names></name> <name><surname>Orero</surname> <given-names>J</given-names></name></person-group>. <article-title>Mining fuzzy-temporal gradual patterns.</article-title> In: <source><italic>Proceeding of the 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).</italic></source> <publisher-loc>New Orleans, LA</publisher-loc>: <publisher-name>IEEE</publisher-name> (<year>2019</year>). p. <fpage>1</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/FUZZIEEE.2019.8858883</pub-id></citation></ref>
<ref id="B8"><label>8.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shah</surname> <given-names>F</given-names></name> <name><surname>Castelltort</surname> <given-names>A</given-names></name> <name><surname>Laurent</surname> <given-names>A</given-names></name></person-group>. <article-title>Handling missing values for mining gradual patterns from NoSQL graph databases.</article-title> <source><italic>Future Gene Comput Syst.</italic></source> (<year>2020</year>) <volume>111</volume>:<fpage>523</fpage>&#x2013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1016/j.future.2019.10.004</pub-id></citation></ref>
<ref id="B9"><label>9.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Di Jorio</surname> <given-names>L.</given-names></name></person-group> <source><italic>Recherche de motifs graduels et application aux donnees medicales.</italic></source> <comment>Ph.D. thesis</comment>. <publisher-loc>Montpellier</publisher-loc>: <publisher-name>University of Montpellier</publisher-name> (<year>2010</year>).</citation></ref>
<ref id="B10"><label>10.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Laurent</surname> <given-names>A</given-names></name> <name><surname>Lesot</surname> <given-names>M-J</given-names></name> <name><surname>Rifqi</surname> <given-names>M</given-names></name></person-group>. <article-title>Extraction de motifs graduels par correlations d&#x2019;ordres induits.</article-title> In: <source><italic>Rencontres sur la Logique Floue et ses Applications, LFA&#x2019;2010.</italic></source> <publisher-loc>Lannion</publisher-loc> (<year>2010</year>).</citation></ref>
<ref id="B11"><label>11.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Di-Jorio</surname> <given-names>L</given-names></name> <name><surname>Laurent</surname> <given-names>A</given-names></name> <name><surname>Teisseire</surname> <given-names>M</given-names></name></person-group>. <article-title>Mining frequent gradual item sets from large databases.</article-title> In: <source><italic>Proceeding of the International Symposium on Intelligent Data Analysis.</italic></source> <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2009</year>). p. <fpage>297</fpage>&#x2013;<lpage>308</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-642-03915-7_26</pub-id></citation></ref>
<ref id="B12"><label>12.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hullermeier</surname> <given-names>E</given-names></name></person-group>. <article-title>Association rules for expressing gradual dependencies.</article-title> In: <person-group person-group-type="editor"><name><surname>Elomaa</surname> <given-names>T</given-names></name> <name><surname>Mannila</surname> <given-names>H</given-names></name> <name><surname>Toivonen</surname> <given-names>H</given-names></name></person-group> <role>editors</role>. <source><italic>Principles of data mining and knowledge discovery, PKDD lecture notes in computer science.</italic></source> <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2002</year>). p. <fpage>200</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1007/3-540-45681-3_17</pub-id></citation></ref>
<ref id="B13"><label>13.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berzal</surname> <given-names>F</given-names></name> <name><surname>Cubero</surname> <given-names>JC</given-names></name> <name><surname>Sanchez</surname> <given-names>D</given-names></name> <name><surname>Miranda</surname> <given-names>MAV</given-names></name> <name><surname>Serrano</surname> <given-names>J</given-names></name></person-group>. <article-title>An alternative approach to discover gradual dependencies.</article-title> <source><italic>Int J Uncertain Fuzziness Knowl Based Syst.</italic></source> (<year>2007</year>) <volume>15</volume>:<fpage>559</fpage>&#x2013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1142/S021848850700487X</pub-id></citation></ref>
<ref id="B14"><label>14.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marsala</surname> <given-names>C</given-names></name> <name><surname>Laurent</surname> <given-names>A</given-names></name> <name><surname>Lesot</surname> <given-names>M-J</given-names></name> <name><surname>Rifqi</surname> <given-names>M</given-names></name> <name><surname>Castelltort</surname> <given-names>A</given-names></name></person-group>. <article-title>Discovering ordinal attributes through gradual patterns, morphological filters and rank discrimination measures.</article-title> In: <person-group person-group-type="editor"><name><surname>Ciucci</surname> <given-names>D</given-names></name> <name><surname>Pasi</surname> <given-names>G</given-names></name> <name><surname>Vantaggi</surname> <given-names>B</given-names></name></person-group> <role>editors</role>. <source><italic>Scalable uncertainty management, lecture notes in computer science.</italic></source> <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name> (<year>2018</year>). p. <fpage>152</fpage>&#x2013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-00461-3_11</pub-id></citation></ref>
<ref id="B15"><label>15.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aryadinata</surname> <given-names>YS</given-names></name> <name><surname>Lin</surname> <given-names>Y</given-names></name> <name><surname>Barcellos</surname> <given-names>C</given-names></name> <name><surname>Laurent</surname> <given-names>A</given-names></name> <name><surname>Libourel</surname> <given-names>T</given-names></name></person-group>. <article-title>Mining epidemiological dengue fever data from Brazil: a gradual pattern based geographical information system.</article-title> In: <person-group person-group-type="editor"><name><surname>Laurent</surname> <given-names>A</given-names></name> <name><surname>Strauss</surname> <given-names>O</given-names></name> <name><surname>Bouchon-Meunier</surname> <given-names>B</given-names></name> <name><surname>Yager</surname> <given-names>RR</given-names></name></person-group> <role>editors</role>. <source><italic>Information processing and management of uncertainty in knowledge-based systems, communications in computer and information science.</italic></source> <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name> (<year>2014</year>). p. <fpage>414</fpage>&#x2013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-08855-6_42</pub-id></citation></ref>
<ref id="B16"><label>16.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Djamegni Clementin</surname> <given-names>T</given-names></name> <name><surname>Fotso Laurent</surname> <given-names>T</given-names></name> <name><surname>Cabrel</surname> <given-names>K</given-names></name> <name><surname>Belise</surname> <given-names>E</given-names></name></person-group>. <article-title>Un nouvel algorithme d&#x2019;extraction des motifs graduels appele Sgrite.</article-title> In: <source><italic>Proceeding of the CARI 2020 - Colloque Africain sur la Recherche en Informatique et en Mathemathiques Appliquees.</italic></source> <publisher-loc>Thies, SN</publisher-loc> (<year>2020</year>).</citation></ref>
</ref-list>
</back>
</article>
