<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Bohr. Iam.</journal-id>
<journal-title>BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Bohr. Iam.</abbrev-journal-title>
<issn pub-type="epub">2583-5521</issn>
<publisher>
<publisher-name>BOHR</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.54646/bijiam.2024.25</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A review of deep learning for facial aging</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Shuai</surname> <given-names>Xianya</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Chen</surname> <given-names>Yujiong</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Wu</surname> <given-names>Mingsong</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Zhang</surname> <given-names>Zhimin</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Medical Genetics, Zunyi Medical University</institution>, <addr-line>Zunyi</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>School of Stomatology, Zunyi Medical University</institution>, <addr-line>Zunyi</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>School of Medicine &#x0026; Nursing, Huzhou University</institution>, <addr-line>Huzhou</addr-line>, <country>China</country></aff>
<author-notes>
<corresp id="c001">&#x002A;Correspondence: Zhimin Zhang, <email>zzm_zhangzhimin@126.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>09</day>
<month>10</month>
<year>2024</year>
</pub-date>
<volume>3</volume>
<issue>1</issue>
<fpage>49</fpage>
<lpage>57</lpage>
<history>
<date date-type="received">
<day>22</day>
<month>07</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>30</day>
<month>09</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2024 Shuai, Chen, Wu and Zhang.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Shuai, Chen, Wu and Zhang</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/"><p>&#x00A9; The Author(s). 2024 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.</p></license>
</permissions>
<abstract>
<p>This paper reviews the application and prospects of deep learning in the study of facial aging. Population aging has become one of the most prominent social problems in China. Facial aging is not only a major focus of biomedical research but also plays an important role in social life. The development of deep learning technology provides new methods and tools for the study of facial aging. Research on facial aging mainly comprises two tasks: age estimation and aging synthesis. Age estimation predicts the real age of a human face by extracting facial features and building a model, while aging synthesis generates an image that simulates how the face changes over time. Deep learning models, particularly convolutional neural networks (CNNs) and generative adversarial networks (GANs), have shown strong performance on both tasks, demonstrating particular promise in extracting age-related patterns and modeling facial change over time. Although deep learning has achieved remarkable results in the study of facial aging, it still faces challenges such as the difficulty of obtaining data, the irreversibility of the aging process, and individual differences. In the future, algorithms will need further optimization to improve the generalization ability and accuracy of models while accounting for personalized features. Continued progress and innovation in deep learning technology will bring more possibilities to research on facial aging and help address the challenges posed by population aging.</p>
</abstract>
<kwd-group>
<kwd>deep learning</kwd>
<kwd>face recognition</kwd>
<kwd>face synthesis</kwd>
<kwd>aging</kwd>
<kwd>facial aging</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="3"/>
<equation-count count="0"/>
<ref-count count="52"/>
<page-count count="9"/>
<word-count count="6244"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>Introduction</title>
<p>Population aging has emerged as one of the most pressing social issues confronting China in the 21st century. Aging is an inevitable biological process that all humans experience (<xref ref-type="bibr" rid="B1">1</xref>). It is also a significant risk factor for a range of chronic diseases, including neurodegenerative diseases, cardiovascular and cerebrovascular diseases, osteoarthritis, and chronic kidney disease (<xref ref-type="bibr" rid="B2">2</xref>). As an important carrier of biological information, the face conveys various attributes such as identity, age, gender, expression, and emotion. Over time, facial appearance inevitably changes with age (<xref ref-type="bibr" rid="B3">3</xref>).</p>
<p>Facial aging serves not only as a health indicator in biomedicine (<xref ref-type="bibr" rid="B4">4</xref>) but also as a topic of considerable societal interest. Research on facial aging primarily encompasses two areas: age estimation and aging synthesis. Most traditional age estimation algorithms consist of two stages: age feature extraction and age estimation (<xref ref-type="bibr" rid="B5">5</xref>). There are two critical requirements for generating aging images: the accuracy of the estimated age and the consistency of the image identity (<xref ref-type="bibr" rid="B6">6</xref>). Image-based human age estimation has a wide array of practical applications, including demographic data collection in supermarkets and other public areas, age-specific user interfaces, age-oriented advertising, and the identification of individuals using old ID photos (<xref ref-type="bibr" rid="B7">7</xref>). However, predicting the age of a given facial image is challenging due to the slow and complex nature of human facial aging, which is influenced by numerous internal and external factors (<xref ref-type="bibr" rid="B8">8</xref>). To address the needs of scientific research and societal demands, the technology that employs computer analysis to examine the age characteristics of facial images over time and synthesize images that aesthetically reflect natural aging or rejuvenation is called face aging synthesis (<xref ref-type="bibr" rid="B9">9</xref>).</p>
<p>Deep learning is a subfield of machine learning (<xref ref-type="bibr" rid="B10">10</xref>) that focuses on modeling specific real-world problems using multi-layer neural networks designed to simulate the human nervous system (<xref ref-type="bibr" rid="B11">11</xref>). Commonly employed models include convolutional neural networks (CNNs) (<xref ref-type="bibr" rid="B12">12</xref>), recurrent neural networks (<xref ref-type="bibr" rid="B13">13</xref>), generative adversarial networks (GANs) (<xref ref-type="bibr" rid="B14">14</xref>), and long short-term memory networks (<xref ref-type="bibr" rid="B15">15</xref>), among others. The concept and application of neural networks began to emerge in the 1980s (<xref ref-type="bibr" rid="B16">16</xref>). In the 21st century, the arrival of the big data era has supplied abundant training data for deep learning (<xref ref-type="bibr" rid="B17">17</xref>), and the continuous refinement of optimization algorithms has further supported its advancement (<xref ref-type="bibr" rid="B18">18</xref>). Currently, deep learning has achieved remarkable success across various fields, including computer vision, speech recognition, and natural language processing (<xref ref-type="bibr" rid="B19">19</xref>). In the medical domain, deep learning technology is widely used for medical image analysis, disease diagnosis, and the formulation of treatment plans (<xref ref-type="bibr" rid="B20">20</xref>). Therefore, we review the applications of deep learning in facial aging from two aspects, aging face recognition and aging face synthesis, with the aim of further understanding scientific progress in this field.</p>
</sec>
<sec id="S2">
<title>Face datasets for training and testing deep learning models in aging-face recognition and synthesis</title>
<p>In deep learning, datasets serve as the foundation for training neural networks, making the quality and size of the dataset critically important. Therefore, for tasks such as face age estimation and aging face synthesis, it is essential to collect a substantial number of accurately labeled facial images. A deep network typically demands millions of training samples to ensure both accuracy and generalization during training. Different datasets have unique characteristics and contain a vast array of facial images representing various ages, genders, races, and expressions. Additionally, the age label in these images may not reflect the exact age of the individual but rather their age group. To meet the diverse requirements of face-aging research, researchers frequently need to use different datasets to train their deep learning networks. Some datasets concentrate on detailed facial images of elderly people, while others cover faces across age groups. As a result, researchers must carefully select the appropriate dataset based on the specific characteristics of the problem at hand and the goals they aim to achieve. This ensures that the deep learning network model can effectively identify and synthesize aging faces. Notable datasets used in this field include the Cross-Age Celebrity Dataset (CACD), the Internet Movie Database&#x2013;Wikipedia dataset (IMDB-WIKI), MORPH-II, FG-NET, the UTKFace dataset, and Labeled Faces in the Wild (LFW). A detailed description of each of these datasets can be found in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
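<p>For example, UTKFace is commonly distributed with its labels encoded directly in each filename, following the convention [age]_[gender]_[race]_[datetime].jpg. A minimal Python sketch (the helper name is ours, and the convention should be checked against the specific release in use) recovers the annotations without a separate label file:</p>

```python
def parse_utkface_name(filename):
    """Recover age, gender, and race labels from a UTKFace-style filename.

    Assumes the common [age]_[gender]_[race]_[datetime].jpg convention,
    where gender and race are small integer codes.
    """
    stem = filename.rsplit(".", 1)[0]        # drop the extension
    age, gender, race = stem.split("_")[:3]  # the trailing field is a timestamp
    return {"age": int(age), "gender": int(gender), "race": int(race)}

labels = parse_utkface_name("25_0_1_20170116174525125.jpg")
# labels -> {"age": 25, "gender": 0, "race": 1}
```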
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Commonly utilized facial dataset.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Name</td>
<td valign="top" align="left">Collecting agency</td>
<td valign="top" align="left">Data source</td>
<td valign="top" align="left">Image quantity</td>
<td valign="top" align="left">Age</td>
<td valign="top" align="left">Dataset characteristics</td>
<td valign="top" align="left">Advantage</td>
<td valign="top" align="left">Limitation</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">CACD (<xref ref-type="bibr" rid="B21">21</xref>, <xref ref-type="bibr" rid="B22">22</xref>)</td>
<td valign="top" align="left">Department of Computer Science, University of Maryland</td>
<td valign="top" align="left">Internet</td>
<td valign="top" align="left">163241</td>
<td valign="top" align="left">16&#x2013;62 years old</td>
<td valign="top" align="left">The time when celebrities&#x2019; photos are published on the Internet may not be accurate, and there is some data noise.</td>
<td valign="top" align="left">CACD covers a broad range of age and gender characteristics.</td>
<td valign="top" align="left">Lacks adaptability to the general population.</td>
</tr>
<tr>
<td valign="top" align="left">IMDB-WIKI (<xref ref-type="bibr" rid="B23">23</xref>)</td>
<td valign="top" align="left">Computer Vision Laboratory, ETH Zurich</td>
<td valign="top" align="left">Internet</td>
<td valign="top" align="left">524230</td>
<td valign="top" align="left">Average age 32</td>
<td valign="top" align="left">Wide range of ages with age labels.</td>
<td valign="top" align="left">1. Large-scale and diverse samples<break/>2. Accurate age labeling</td>
<td valign="top" align="left">There may be insufficient samples in certain age groups, causing bias in the model during training.</td>
</tr>
<tr>
<td valign="top" align="left">FG-NET (<xref ref-type="bibr" rid="B24">24</xref>)</td>
<td valign="top" align="left">Autonomous University of Barcelona, Spain</td>
<td valign="top" align="left">Camera shooting and photo scanning</td>
<td valign="top" align="left">1002</td>
<td valign="top" align="left">0&#x2013;69 years old</td>
<td valign="top" align="left">The total number of face images is too small, the age distribution is not balanced, and the image quality is very different.</td>
<td valign="top" align="left">1. Accurate age labeling<break/>2. Images come from a variety of environments and shooting conditions, such as different lighting and expressions, adding to the authenticity of the dataset</td>
<td valign="top" align="left">1. Small sample size<break/>2. Uneven age distribution</td>
</tr>
<tr>
<td valign="top" align="left">UTKFace (<xref ref-type="bibr" rid="B25">25</xref>, <xref ref-type="bibr" rid="B26">26</xref>)</td>
<td valign="top" align="left">University of Tennessee, Knoxville</td>
<td valign="top" align="left">Internet</td>
<td valign="top" align="left">20000</td>
<td valign="top" align="left">0&#x2013;116 years old</td>
<td valign="top" align="left">The faces in the images have different expressions and postures.</td>
<td valign="top" align="left">1. Tagged with age, gender, and race<break/>2. The dataset covers different ages, genders, and ethnicities</td>
<td valign="top" align="left">The images in the dataset do not contain background or environmental information</td>
</tr>
<tr>
<td valign="top" align="left">MORPH Album I (<xref ref-type="bibr" rid="B27">27</xref>)</td>
<td valign="top" align="left">Department of Computer Science, University of North Carolina Wilmington</td>
<td valign="top" align="left">Public data</td>
<td valign="top" align="left">1724</td>
<td valign="top" align="left">16&#x2013;77 years old</td>
<td valign="top" align="left">All individuals have at least one duplicate image</td>
<td valign="top" align="left">1. Tagged with age, gender, and race<break/>2. The images cover different age groups of the same individual</td>
<td valign="top" align="left">1. The dataset consists primarily of black and white Americans, with limited racial diversity globally.<break/>2. Small sample size.</td>
</tr>
<tr>
<td valign="top" align="left">MORPH Album II (<xref ref-type="bibr" rid="B27">27</xref>)</td>
<td valign="top" align="left"/><td valign="top" align="left">Manual photograph collection</td>
<td valign="top" align="left">55134</td>
<td valign="top" align="left">16&#x2013;77 years old</td>
<td valign="top" align="left">Age labels are accurate, with multiple images per individual collected over time</td>
<td valign="top" align="left">1. Tagged with age, gender, and race<break/>2. Compared to MORPH Album I, the amount of data and the number of identities are significantly increased<break/>3. Higher image quality</td>
<td valign="top" align="left">The background is relatively simple.</td>
</tr>
<tr>
<td valign="top" align="left">CASIA-WebFace (<xref ref-type="bibr" rid="B28">28</xref>, <xref ref-type="bibr" rid="B29">29</xref>)</td>
<td valign="top" align="left">Institute of Automation, Chinese Academy of Sciences</td>
<td valign="top" align="left">Internet</td>
<td valign="top" align="left">494414</td>
<td valign="top" align="left">/</td>
<td valign="top" align="left">The scale and direction of the picture are inconsistent, but the data distribution is more uniform.</td>
<td valign="top" align="left">1. Large-scale and diverse samples<break/>2. Available for free</td>
<td valign="top" align="left">The data set does not provide detailed annotation information such as age, gender, race, etc.</td>
</tr>
<tr>
<td valign="top" align="left">LFW (<xref ref-type="bibr" rid="B30">30</xref>, <xref ref-type="bibr" rid="B31">31</xref>)</td>
<td valign="top" align="left">UMass Amherst&#x2019;s Computer Vision Laboratory</td>
<td valign="top" align="left">Internet</td>
<td valign="top" align="left">13233</td>
<td valign="top" align="left">/</td>
<td valign="top" align="left">The images are from a natural scene.</td>
<td valign="top" align="left">1. Image pairs are divided into positive sample pairs (same person) and negative sample pairs (different people), providing a unified evaluation standard.<break/>2. The images are from a natural scene.</td>
<td valign="top" align="left">Lack of tags with other additional attributes such as age, gender, race, emotion, etc.</td>
</tr>
<tr>
<td valign="top" align="left">MegaAge (<xref ref-type="bibr" rid="B32">32</xref>, <xref ref-type="bibr" rid="B33">33</xref>)</td>
<td valign="top" align="left">SenseTime</td>
<td valign="top" align="left">Internet</td>
<td valign="top" align="left">41941</td>
<td valign="top" align="left">0&#x2013;70 years old</td>
<td valign="top" align="left">Provides a variety of unrestricted scene images.</td>
<td valign="top" align="left">1. Large-scale and diverse samples<break/>2. The dataset contains a specialized subset of Asian faces.</td>
<td valign="top" align="left">Only age tags are provided, without additional attribute tags, such as gender, emotion, race, etc.</td>
</tr>
</tbody>
</table></table-wrap>
</sec>
<sec id="S3">
<title>The application of deep learning in facial aging recognition</title>
<p>Most existing algorithms require a large dataset with accompanying age labels, which limits their applicability to unlabeled or weakly labeled data. To tackle this limitation, Hu Z et al. (<xref ref-type="bibr" rid="B3">3</xref>) proposed a novel learning framework that leverages weakly labeled data through a deep CNN. This framework employs Kullback-Leibler divergence to integrate age difference information from image pairs of the same individual. By utilizing a combination of entropy loss and cross-entropy loss, it effectively quantifies the disparity between the predicted label distribution and the true distribution, thereby enhancing age estimation performance.</p>
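<p>The exact loss formulation of this framework is not reproduced here. As an illustrative sketch only, label-distribution learning of the kind described above turns a scalar age into a soft distribution over age labels and penalizes the Kullback-Leibler divergence between the predicted and target distributions (the function names and the Gaussian width below are our assumptions):</p>

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over age labels."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def gaussian_label_dist(ages, mu, sigma=2.0):
    """Soft target label distribution: a Gaussian centered on the true age."""
    weights = [math.exp(-0.5 * ((a - mu) / sigma) ** 2) for a in ages]
    total = sum(weights)
    return [w / total for w in weights]

ages = list(range(81))                      # age labels 0..80
target = gaussian_label_dist(ages, mu=30)   # ground-truth age 30
uniform = [1.0 / len(ages)] * len(ages)     # an uninformative prediction
# KL(target, target) is 0; KL(target, uniform) is strictly positive
```

<p>Minimizing such a divergence pushes the predicted distribution toward the soft target while tolerating small errors between neighboring ages.</p>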
<p>The deep CNN architecture proposed by Alsaleh A et al. (<xref ref-type="bibr" rid="B34">34</xref>) produces more accurate results in a shorter time frame than earlier approaches. This research employs an optimized CNN with four convolutional layers and two fully connected (FC) layers for age-group classification. Other CNN architectures, such as ranking CNN, the VGG-Face network, GoogLeNet, residual networks of residual networks (RoR), and deep expectation of apparent age (DEX), have also been used in related research. A CNN consists of convolutional layers, pooling layers, and FC layers. The convolutional layers apply multiple filters to images to extract features while reducing the number of parameters and the risk of overfitting. The pooling layers reduce the spatial size of the convolutional outputs without discarding important features, giving the network a degree of positional stability.</p>
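<p>The way convolution and pooling shrink feature maps, and hence parameter counts, can be made concrete with the standard output-size formula; the layer sizes below are illustrative rather than those of the architecture above:</p>

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Illustrative four-conv network on a 64x64 input,
# each 3x3 convolution followed by 2x2 max pooling
size = 64
for _ in range(4):
    size = conv_out(size, kernel=3, pad=1)       # padding preserves the size
    size = conv_out(size, kernel=2, stride=2)    # pooling halves it
# size -> 4: the FC layers see a 4x4 map instead of the full 64x64 input
```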
<p>The method proposed by Sajid M et al. (<xref ref-type="bibr" rid="B35">35</xref>) for recognizing age-discriminative facial images achieved overall accuracy rates of 93.86% and 92.98% on the MORPH II and FERET datasets, respectively. This significantly enhanced the recognition accuracy of facial images categorized by age.</p>
<p>Vikas S et al. (<xref ref-type="bibr" rid="B36">36</xref>) proposed a deep CNN method based on transfer learning, utilizing multiple public datasets (such as IMDB-WIKI and MORPH-II) to train and test the model. The study compared the performance of different pre-trained models (such as VGG and ResNet) and explored the impact of various fine-tuning strategies on the prediction results. This transfer learning-based deep CNN approach achieved excellent results in age and gender prediction, significantly reducing training time and data requirements, while transfer learning greatly enhanced the model&#x2019;s performance (<xref ref-type="table" rid="T2">Table 2</xref>).</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Aging facial recognition methods and evaluation.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Researcher</td>
<td valign="top" align="left">Dataset source</td>
<td valign="top" align="left">Algorithm</td>
<td valign="top" align="left">Advantage</td>
<td valign="top" align="left">Limitation</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Hu Z et al. (<xref ref-type="bibr" rid="B3">3</xref>)</td>
<td valign="top" align="left">Face dataset constructed by the authors themselves</td>
<td valign="top" align="left">Convolutional neural network</td>
<td valign="top" align="left">The method using age difference information to estimate face age is more accurate.</td>
<td valign="top" align="left">The method has a strong dependence on data with age labels.</td>
</tr>
<tr>
<td valign="top" align="left">Alsaleh A et al. (<xref ref-type="bibr" rid="B34">34</xref>)</td>
<td valign="top" align="left">UTKFace and Facial-age</td>
<td valign="top" align="left">Convolutional neural network</td>
<td valign="top" align="left">The number of parameters is reduced, the computing resources are saved, and the training time is shortened.</td>
<td valign="top" align="left">Facial expressions are influenced by cultural background and personality factors.</td>
</tr>
<tr>
<td valign="top" align="left">Sajid M et al. (<xref ref-type="bibr" rid="B35">35</xref>)</td>
<td valign="top" align="left">MORPH and FERET</td>
<td valign="top" align="left">Deep convolutional neural networks</td>
<td valign="top" align="left">Age group estimation based on facial asymmetry is realized.</td>
<td valign="top" align="left">The asymmetry of certain facial features (such as due to trauma) can affect age estimation.</td>
</tr>
<tr>
<td valign="top" align="left">Vikas S et al. (<xref ref-type="bibr" rid="B36">36</xref>)</td>
<td valign="top" align="left">IMDB-WIKI and MORPH-II</td>
<td valign="top" align="left">Deep convolutional neural network based on transfer learning</td>
<td valign="top" align="left">The combination of transfer learning and multi-task learning improves the accuracy of age prediction.</td>
<td valign="top" align="left">The image resolution requirements are high, and the quality and imbalance of the dataset can easily affect the model&#x2019;s prediction performance.</td>
</tr>
<tr>
<td valign="top" align="left">Yulan Deng (<xref ref-type="bibr" rid="B37">37</xref>)</td>
<td valign="top" align="left">MORPH II and FG-NET datasets</td>
<td valign="top" align="left">Convolutional neural network</td>
<td valign="top" align="left">1. The influence of different gender characteristics on the age assessment model was overcome.<break/>2. Take advantage of the continuity and orderliness of age labels.<break/>3. A 3D face age assessment model is proposed.</td>
<td valign="top" align="left">Occlusion, expression, makeup, and living environment affect age assessment.</td>
</tr>
<tr>
<td valign="top" align="left">Chen Kaiyan (<xref ref-type="bibr" rid="B38">38</xref>)</td>
<td valign="top" align="left">FG-NET, UTKFace, CACD, AFAD, FairFace, etc.</td>
<td valign="top" align="left">Convolutional neural network</td>
<td valign="top" align="left">All features of the extracted whole and local facial features are integrated to extract finer-grained features.</td>
<td valign="top" align="left">Additional facial touches (e.g., beard, glasses, hair) are not pre-treated.</td>
</tr>
<tr>
<td valign="top" align="left">Sheng Mengjiao et al. (<xref ref-type="bibr" rid="B39">39</xref>)</td>
<td valign="top" align="left">CACD and MORPH Album</td>
<td valign="top" align="left">Generative adversarial network</td>
<td valign="top" align="left">The aging accuracy of the cross-age face synthesis method is relatively high.</td>
<td valign="top" align="left">High computational complexity.</td>
</tr>
<tr>
<td valign="top" align="left">Zhang MM et al. (<xref ref-type="bibr" rid="B40">40</xref>)</td>
<td valign="top" align="left">A facial database of 10,529 images from 1821 patients in the hospital where the author worked</td>
<td valign="top" align="left">Convolutional neural network</td>
<td valign="top" align="left">Models are trained to consider more specific and subtle facial features.</td>
<td valign="top" align="left">The face database trained by the model is unevenly distributed, with few males and a serious imbalance in age distribution.</td>
</tr>
<tr>
<td valign="top" align="left">Shi C et al. (<xref ref-type="bibr" rid="B41">41</xref>)</td>
<td valign="top" align="left">MORPH Album II, FG-NET, and Adience datasets</td>
<td valign="top" align="left">Convolution age estimation framework based on attention</td>
<td valign="top" align="left">Face regions containing rich age-specific information are extracted to capture the global and local information of images better.</td>
<td valign="top" align="left">High computational complexity.</td>
</tr>
<tr>
<td valign="top" align="left">Beichen Zhang et al. (<xref ref-type="bibr" rid="B42">42</xref>)</td>
<td valign="top" align="left">CACD and AFAD</td>
<td valign="top" align="left">Convolutional neural networks combined with deep regression forests</td>
<td valign="top" align="left">Age estimation and head pose estimation are combined for age estimation in video.</td>
<td valign="top" align="left">There may be some videos that do not contain anything that meets our frontal view criteria.</td>
</tr>
<tr>
<td valign="top" align="left">Huynh HT et al. (<xref ref-type="bibr" rid="B43">43</xref>)</td>
<td valign="top" align="left">MMLAB and MegaAge-Asian</td>
<td valign="top" align="left">Wide ResNet</td>
<td valign="top" align="left">The model achieves a better error rate and accuracy and can in principle be extended to non-Asian populations.</td>
<td valign="top" align="left">The model performs poorly on non-Asian ethnic groups.</td>
</tr>
</tbody>
</table></table-wrap>
<p>To address the limitations of gender, race, posture, and lighting variations in face age estimation, Yulan Deng (<xref ref-type="bibr" rid="B37">37</xref>) utilized data captured by a Kinect sensor in RGB-D format to capture 3D facial data. A compact age network with four convolutional layers and two FC layers was employed to simplify the deep learning network. The convolutional layers featured smaller kernel sizes and strides, reducing the size of the age feature maps. A deep regression function was used to convert the fused features into precise age values. Overall, this method combines model simplification, multi-feature fusion, three-dimensional age feature learning, a coarse-to-fine strategy, deep regression functions, and fused feature transformation techniques to improve the performance and robustness of age estimation models.</p>
<p>Chen Kaiyan (<xref ref-type="bibr" rid="B38">38</xref>) proposed a depthwise-separable gated CNN model that combines global and local facial features, using the texture and geometric features of different facial regions for feature selection and age-category prediction. The model also helps reduce noise and improve image quality.</p>
<p>Sheng Mengjiao et al. (<xref ref-type="bibr" rid="B39">39</xref>) proposed two methods based on generative adversarial networks (GANs). The first combines a GAN with Ranking-CNN, consisting of four modules: a generator, a discriminator, a pre-trained AlexNet network, and Ranking-CNN. Experimental results on the CACD dataset show that this method improves aging accuracy by 4.1% compared to the identity-preserved conditional generative adversarial networks (IPCGANs) method. The second method introduces a feature-disentangling GAN and a multi-task discriminator, using the GAN and feature decoupling to improve cross-age face recognition. This approach addresses the need for large training datasets, achieving a high recognition accuracy of 98.23% with fewer training samples.</p>
<p>The framework proposed by Shi C et al. (<xref ref-type="bibr" rid="B41">41</xref>) combines attention-based convolution and the Swin Transformer, utilizing a hierarchical structure similar to a CNN along with a window attention mechanism to handle images at different scales. It integrates shallow convolutions with a multi-head attention mechanism, enriching image features and learning regions that carry age-specific information. Compared to traditional convolution, attention-based convolution can extract important features from sparse data, process multiple segments of the input, and capture long-range dependencies between features. It also covers larger receptive fields, enhancing the model&#x2019;s ability to capture both global and local information and thereby strengthening feature dependencies (<xref ref-type="bibr" rid="B41">41</xref>).</p>
<p>Beichen Zhang et al. (<xref ref-type="bibr" rid="B42">42</xref>) proposed an improvement in age estimation for faces in videos by using deep regression forests (DRF) for age estimation and multi-loss CNNs for head pose estimation. The system estimates age and head pose frame by frame in the video, performing age estimation for frames within a specified threshold for head pose. Compared to traditional methods, it achieves a smaller standard deviation of estimation error, offering better accuracy and reliability for age estimation of faces in videos.</p>
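<p>The frame-selection step can be sketched as a simple head-pose threshold; the limits below are hypothetical and not the thresholds used in the cited work:</p>

```python
def near_frontal_frames(poses, yaw_limit=30.0, pitch_limit=20.0):
    """Indices of frames whose (yaw, pitch) head pose is close enough to frontal."""
    return [i for i, (yaw, pitch) in enumerate(poses)
            if abs(yaw) <= yaw_limit and abs(pitch) <= pitch_limit]

poses = [(5.0, 3.0), (60.0, 0.0), (-10.0, 25.0), (0.0, 0.0)]
# Frames 0 and 3 pass; frame 1 fails on yaw, frame 2 on pitch
```

<p>Age estimation would then be run only on the retained frames and aggregated across them.</p>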
<p>The algorithm proposed by Huynh HT et al. (<xref ref-type="bibr" rid="B43">43</xref>) rivals Microsoft&#x2019;s application programming interface (API) estimator, achieving an improvement of 1% in gender accuracy. This model utilizes deep learning and computer vision algorithms to achieve high precision in age and gender estimation, with a lower error rate and reasonable accuracy.</p>
<p>The deep CNN trained on 3D facial images by Xia X et al. (<xref ref-type="bibr" rid="B52">52</xref>) achieved high accuracy in predicting chronological age and perceived age, with mean absolute deviations of 2.79 and 2.90 years, respectively. The CNN model outperforms linear models and estimates age consistently across different cohorts. Furthermore, compared with predictors of actual age, the CNN model reveals a stronger association between lifestyle and health parameters and aging.</p>
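<p>The mean absolute deviation reported above is the standard mean absolute error between predicted and actual ages:</p>

```python
def mean_absolute_error(predicted, actual):
    """Average absolute gap between predicted and true ages, in years."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Toy example (not the paper's data): predictions off by 1, 3, and 2 years
mae = mean_absolute_error([31, 47, 60], [30, 50, 62])
# mae -> 2.0
```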
</sec>
<sec id="S4">
<title>Application of deep learning in face aging synthesis</title>
<p>Han S et al. (<xref ref-type="bibr" rid="B44">44</xref>) synthesized faces of different ages from a dataset using a style-based age transformation model, which is capable of generating faces ranging from 10 to 70 years old.</p>
<p>Wang W et al. (<xref ref-type="bibr" rid="B45">45</xref>) proposed a recurrent facial aging (RFA) framework that takes a single image as input and generates a series of aging faces, making the aging process more realistic through smooth transitional states. This framework uses a three-layer gated recurrent unit (GRU) as the basic recurrent module to generate smooth transitions. The bottom layer serves as an encoder, encoding the input face into a high-dimensional variable, while the top layer acts as a decoder, decoding the hidden variable into aging faces. The intermediate layer, with its high dimensionality, can model complex dynamic aging patterns, and its output serves as the autoregressive input for the next iteration when synthesizing aged faces. Additionally, the RFA framework generates smooth transitional faces while preserving identity information by projecting normalized images into a feature manifold space and transferring textures from the nearest neighbors in that space.</p>
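<p>A minimal GRU cell, written in NumPy with random weights, illustrates the recurrent module underlying the RFA framework. The dimensions and initialization are illustrative assumptions; the real model learns its weights and stacks three such layers over image encodings.</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal gated recurrent unit; a toy stand-in for the recurrent
    module in the RFA framework (the real model is learned and stacked)."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        self.Wz = rng.standard_normal((hid_dim, in_dim + hid_dim)) * s
        self.Wr = rng.standard_normal((hid_dim, in_dim + hid_dim)) * s
        self.Wh = rng.standard_normal((hid_dim, in_dim + hid_dim)) * s

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                              # update gate
        r = sigmoid(self.Wr @ xh)                              # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

cell = GRUCell(in_dim=8, hid_dim=16)
h = np.zeros(16)
x = np.ones(8)
# Autoregressive rollout: each hidden state feeds the next step,
# mirroring how RFA chains smooth transitional aging states.
states = []
for _ in range(5):
    h = cell.step(x, h)
    states.append(h)
print(len(states), states[0].shape)  # 5 (16,)
```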
<p>Yang H et al. (<xref ref-type="bibr" rid="B46">46</xref>) addressed the issues of aging accuracy and identity persistence by proposing a novel age progression method based on GAN. This method uses a pyramid-structured discriminator to simulate overall muscle relaxation and local fine wrinkles and employs an adversarial learning scheme to train a single generator along with multiple parallel discriminators, thereby generating smooth continuous aging sequences. Moreover, an identity loss function is introduced to retain personalized information and ensure identity persistence, along with pixel-level L2 loss and total variation regularization loss to bridge the input-output gap and promote spatial smoothness.</p>
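<p>The multi-term objective described above can be sketched as a weighted sum. The adversarial and identity terms are passed in as placeholder scalars, and all weights are illustrative assumptions rather than the authors&#x2019; values; only the pixel-level L2 and total variation terms are computed explicitly.</p>

```python
import numpy as np

def l2_loss(output, target):
    """Pixel-level L2 term that bridges the input-output gap."""
    return float(np.mean((output - target) ** 2))

def tv_loss(img):
    """Total variation regularizer promoting spatial smoothness:
    mean absolute difference between neighboring pixels."""
    dh = np.abs(np.diff(img, axis=0)).mean()
    dw = np.abs(np.diff(img, axis=1)).mean()
    return float(dh + dw)

def generator_loss(output, inp, adv_term, id_term,
                   w_adv=1.0, w_id=0.1, w_pix=10.0, w_tv=1e-4):
    """Weighted sum in the spirit of the multi-term objective; the
    adversarial/identity terms and all weights are placeholders."""
    return (w_adv * adv_term + w_id * id_term
            + w_pix * l2_loss(output, inp) + w_tv * tv_loss(output))

rng = np.random.default_rng(1)
inp = rng.random((32, 32, 3))
out = inp + 0.01 * rng.standard_normal((32, 32, 3))  # lightly perturbed "aged" output
loss = generator_loss(out, inp, adv_term=0.7, id_term=0.2)
print(round(loss, 4))
```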
<p>Zhang J et al. (<xref ref-type="bibr" rid="B47">47</xref>) improved identity preservation and age proximity in 3D facial geometry using a Mesh Wasserstein generative adversarial network (WGAN) architecture, which comprises an identity encoder, an age mapping network, and a decoder. What distinguishes the Mesh WGAN architecture from others is its inclusion of latent and style age codes. The aging facial geometries produced by this method exhibit more consistent identity and age features, while also enabling continuous age transformations. The use of adversarial loss with multi-task gradient penalties stabilizes the training process, further enhancing the generator&#x2019;s performance.</p>
<p>Sharma N et al. (<xref ref-type="bibr" rid="B25">25</xref>) combined Attention GAN and a super-resolution generative adversarial network (SRGAN) for facial aging generation. The Attention GAN employs two independent subnetworks in the generator to create an attention mask and a content mask, which are then combined with the input image to achieve the desired result. The method reduces computational complexity and training time using regular expression filters, generating high-quality facial aging images with less computation time and storage space and thus yielding realistic super-resolution facial aging images.</p>
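<p>The attention-mask composition at the heart of Attention GAN can be sketched in a few lines: the mask decides where generated content replaces the input, so age changes concentrate on attended regions. The mask and content below are synthetic stand-ins for the two subnetworks&#x2019; outputs.</p>

```python
import numpy as np

def blend_with_attention(inp, content, attn_mask):
    """Attention-GAN-style composition: the attention mask selects where
    the generated content replaces the input pixels, leaving unattended
    regions of the original face untouched."""
    return attn_mask * content + (1.0 - attn_mask) * inp

rng = np.random.default_rng(2)
inp = rng.random((4, 4, 3))       # input face (toy resolution)
content = rng.random((4, 4, 3))   # generated aging content
mask = np.zeros((4, 4, 1))
mask[1:3, 1:3] = 1.0              # attend only to a central region
out = blend_with_attention(inp, content, mask)
print(out.shape)  # (4, 4, 3)
```

<p>Outside the attended region the output is exactly the input, which is one way identity-relevant areas can be preserved.</p>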
<p>Grigory A et al. (<xref ref-type="bibr" rid="B48">48</xref>) proposed a face aging recognition method based on conditional generative adversarial networks (cGANs). The cGAN consists of a generator and a discriminator, where the generator is used to produce facial images at different age stages, and the discriminator is used to distinguish between real and synthetic images. On top of the standard GAN, cGAN introduces age labels during the generation and discrimination processes, enabling the generated images to exhibit the characteristics of the target age. To ensure the quality of the generated images, the loss function of cGAN not only includes adversarial loss, which helps create realistic facial images, but also incorporates identity preservation loss and age classification loss. The identity preservation loss ensures that the generated images retain the identity consistency of the input image without altering the core facial features, while the age classification loss uses an age classifier to supervise the age characteristics of the generated images, ensuring that they reflect the target age group. The images generated by this method are visually realistic and match the characteristics of the target age group, outperforming traditional methods in cross-age face recognition tasks (<xref ref-type="table" rid="T3">Table 3</xref>).</p>
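<p>The conditioning step of a cGAN can be sketched by appending a one-hot target-age label to the generator input; the latent size and number of age groups below are illustrative assumptions.</p>

```python
import numpy as np

def one_hot(age_group, n_groups):
    v = np.zeros(n_groups)
    v[age_group] = 1.0
    return v

def condition_on_age(latent, age_group, n_groups=6):
    """cGAN-style conditioning: the target-age label is concatenated to
    the generator's latent input so the generated face reflects the
    requested age group (the discriminator receives the label too)."""
    return np.concatenate([latent, one_hot(age_group, n_groups)])

z = np.random.default_rng(3).standard_normal(100)  # latent noise vector
g_in = condition_on_age(z, age_group=4)            # request age group 4 of 6
print(g_in.shape)  # (106,)
```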
<table-wrap position="float" id="T3">
<label>TABLE 3</label>
<caption><p>Methods and evaluation of aging face synthesis.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Researcher</td>
<td valign="top" align="left">Dataset source</td>
<td valign="top" align="left">Algorithm and main architecture</td>
<td valign="top" align="left">Advantage</td>
<td valign="top" align="left">Limitation</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Han S, Guo Y, Zhou X et al. (<xref ref-type="bibr" rid="B44">44</xref>)</td>
<td valign="top" align="left">The 120 participants (including 60 men and 60 women) were between the ages of 18 and 28</td>
<td valign="top" align="left">Generative adversarial network</td>
<td valign="top" align="left">Generates high-quality images with minimal artifacts and captures natural expressions well.</td>
<td valign="top" align="left">Limited ability to accurately edit expressions, which can produce lower-quality images with visible artifacts.</td>
</tr>
<tr>
<td valign="top" align="left">Wang W, Yan Y, Cui Z et al. (<xref ref-type="bibr" rid="B45">45</xref>)</td>
<td valign="top" align="left">LFW, Google and Bing image search engines, MORPH Aging Dataset, and CACD</td>
<td valign="top" align="left">Recurrent neural networks</td>
<td valign="top" align="left">A recurrent facial aging (RFA) framework composed of three-layer GRUs better preserves identity information and can model complex dynamic aging patterns.</td>
<td valign="top" align="left">High training complexity; the quality of generated images is inconsistent.</td>
</tr>
<tr>
<td valign="top" align="left">Yang H, Huang D, Wang Y (<xref ref-type="bibr" rid="B46">46</xref>)</td>
<td valign="top" align="left">MORPH, CACD, and FG-NET</td>
<td valign="top" align="left">Generative adversarial network</td>
<td valign="top" align="left">A pyramid-structured discriminator estimates high-level facial representations in a more refined manner, yielding more accurate, reliable, and realistic aging effects.</td>
<td valign="top" align="left">Other relevant variables that may significantly influence facial aging, such as health status, lifestyle, and work environment, are not taken into account.</td>
</tr>
<tr>
<td valign="top" align="left">Zhang J, Zhou K, Luximon Y (<xref ref-type="bibr" rid="B47">47</xref>)</td>
<td valign="top" align="left">HeadSpace, FaceScape, FaceWarehouse, BU-3DFE, Florences, and Adult-Heads</td>
<td valign="top" align="left">Generative adversarial network</td>
<td valign="top" align="left">3D facial geometry age conversion can better predict the facial geometry of different age groups.</td>
<td valign="top" align="left">Facial aging geometry is influenced by facial hair, which can lead to inaccurate generation of facial areas with limited training data. The defined age groups (especially 30&#x2013;49 years) may span too broad a range.</td>
</tr>
<tr>
<td valign="top" align="left">Sharma N, Sharma R, Jindal N (<xref ref-type="bibr" rid="B25">25</xref>)</td>
<td valign="top" align="left">UTKFace, CACD, FGNET, IMDB-WIKI, and CelebA</td>
<td valign="top" align="left">Generative adversarial network</td>
<td valign="top" align="left">Well-retained identity and produced super-resolution facial aging images.</td>
<td valign="top" align="left">High data requirements; the training process is prone to instability.</td>
</tr>
<tr>
<td valign="top" align="left">Grigory A et al. (<xref ref-type="bibr" rid="B48">48</xref>)</td>
<td valign="top" align="left">MORPH and CACD</td>
<td valign="top" align="left">Conditional generative adversarial network</td>
<td valign="top" align="left">The generated images are visually realistic and match the characteristics of the target age group.</td>
<td valign="top" align="left">High data requirements; the training process is prone to instability.</td>
</tr>
<tr>
<td valign="top" align="left">Bowen Wang (<xref ref-type="bibr" rid="B49">49</xref>)</td>
<td valign="top" align="left">UTKFace</td>
<td valign="top" align="left">Conditional loop adversarial network</td>
<td valign="top" align="left">Age accuracy and identity consistency are higher. Wrinkles and skin are closer to reality, pictures are sharper, and the way images age is more varied and personalized.</td>
<td valign="top" align="left">The computational complexity is relatively high.</td>
</tr>
<tr>
<td valign="top" align="left">Ran Song (<xref ref-type="bibr" rid="B6">6</xref>)</td>
<td valign="top" align="left">UTKFace face dataset and CUHK Student dataset</td>
<td valign="top" align="left">Generative adversarial network</td>
<td valign="top" align="left">Makes full use of low-dimensional features to obtain high-dimensional features; the model achieves better generation results.</td>
<td valign="top" align="left">Generated images inevitably exhibit some distortion.</td>
</tr>
<tr>
<td valign="top" align="left">Yang Zeng Guo (<xref ref-type="bibr" rid="B50">50</xref>)</td>
<td valign="top" align="left">IMDB-WIKI dataset</td>
<td valign="top" align="left">Dual generative adversarial network</td>
<td valign="top" align="left">The synthetic faces showed differences in aging rate due to gender differences.</td>
<td valign="top" align="left">The perception of face aging in profile images is insufficient.</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">A dataset of 6914 young face images (&#x003C;20 years old) was formed by combining four publicly available datasets: CACD, IMDB-WIKI, FG-NET, and UTKFace</td>
<td valign="top" align="left">Incremental generation of adversarial networks</td>
<td valign="top" align="left">The fidelity of the output face image is improved by adding an attention mechanism that focuses age changes on the facial regions relevant to aging.</td>
<td valign="top" align="left">The algorithm efficiency is low, and the network architecture is slightly complex.</td>
</tr>
<tr>
<td valign="top" align="left">Qiujian Bai (<xref ref-type="bibr" rid="B51">51</xref>)</td>
<td valign="top" align="left">Training uses the UTK training set, and testing uses FGNET and UTK test sets</td>
<td valign="top" align="left">Feature learning generates adversarial networks</td>
<td valign="top" align="left">The structure of the model is simplified, the identity information is better retained, and the robustness of the model is increased.</td>
<td valign="top" align="left">At high ages, the texture details are not rich and the age accuracy is low.</td>
</tr>
<tr>
<td valign="top" align="left"/><td valign="top" align="left">The UTK face database was used for training, and the FGNET face database was used for testing</td>
<td valign="top" align="left">Generative adversarial network</td>
<td valign="top" align="left">The synthetic image is more natural, the image quality is improved, and more abundant texture information can be generated.</td>
<td valign="top" align="left">In the low-age group, age accuracy decreases significantly and the equal error rate of face recognition increases.</td>
</tr>
</tbody>
</table></table-wrap>
<p>Bowen Wang (<xref ref-type="bibr" rid="B49">49</xref>) improved the synthesizer&#x2019;s self-attention mechanism and integrated it into the GAN to better model long-range correlations between pixels, enhancing the GAN&#x2019;s performance. Although self-attention has higher computational complexity than convolution, it offers stronger modeling capability because it models pixel-to-pixel correlations directly. By projecting the correlation matrix into a lower dimensionality, the self-attention mechanism was optimized, significantly reducing computational complexity and improving the generative model&#x2019;s output quality. This approach achieved the synthesis of aging images across multiple age groups, with good results in image authenticity, age accuracy, and identity consistency.</p>
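<p>A rough sketch of the &#x201C;project and reduce dimensionality&#x201D; idea, assuming a Linformer-style projection of the token axis: compressing keys and values from n tokens down to k shrinks the correlation matrix from (n, n) to (n, k). The fixed random projection stands in for a learned one and is not the author&#x2019;s exact construction.</p>

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def projected_self_attention(x, k=16, seed=0):
    """Self-attention with the key/value token axis projected from n down
    to k, so the correlation matrix is (n, k) instead of (n, n). This is
    an illustrative low-rank sketch, not the author's exact method."""
    n, d = x.shape
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((k, n)) / np.sqrt(n)  # token-axis projection
    kv = P @ x                                    # (k, d) compressed tokens
    attn = softmax(x @ kv.T / np.sqrt(d))         # (n, k) correlation matrix
    return attn @ kv                              # (n, d) attended output

x = np.random.default_rng(4).standard_normal((256, 32))  # 256 pixel tokens
y = projected_self_attention(x, k=16)
print(y.shape)  # (256, 32)
```

<p>For n pixel tokens the full attention matrix costs O(n&#x00B2;); the projected version costs O(nk), which is the kind of saving that makes pixel-level self-attention practical in a GAN synthesizer.</p>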
<p>Yang ZengGuo (<xref ref-type="bibr" rid="B50">50</xref>) proposed the probabilistic feature analysis-graph convolution (PFA-GC) algorithm, which more realistically simulates the aging process by focusing on changes in facial muscles and skin, resulting in clearer facial details and consistent image colors. The results indicate that the PFA-GC algorithm outperforms other methods in aging accuracy. However, its limitation lies in the feature aging module, which may lack generalization capability as it requires multiple images of the same individual at different ages for training.</p>
<p>Training deep generators is challenging because of the vanishing gradient phenomenon, in which gradients approach zero during backpropagation. Qiujian Bai (<xref ref-type="bibr" rid="B51">51</xref>) introduced residual modules to overcome this limitation. Each module consists of 1 &#x00D7; 1, 3 &#x00D7; 3, and 1 &#x00D7; 1 convolutions: the first convolution reduces the dimensionality of the data, the second extracts features, and the third restores the data dimensionality. These modules allow gradients to propagate directly from higher layers to lower layers, enabling the network to adjust only the modules that need modification and keeping the training process stable. Residual modules deliver higher performance in image quality and precision; the structure allows deeper network designs while keeping memory usage constant, enhancing image quality and texture information and making the synthesized images more natural and of higher quality, thereby improving the model&#x2019;s generative capability.</p>
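<p>The bottleneck residual module described above can be sketched in NumPy as follows. The weights are random stand-ins for learned parameters; note that with all-zero weights the block reduces to the identity, which is exactly why gradients can bypass it through the skip connection.</p>

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution = per-pixel channel mixing. x: (H, W, Cin), w: (Cin, Cout)."""
    return x @ w

def conv3x3(x, w):
    """Naive 3x3 'same' convolution. x: (H, W, Cin), w: (3, 3, Cin, Cout)."""
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, w.shape[-1]))
    for di in range(3):
        for dj in range(3):
            out += xp[di:di+H, dj:dj+W] @ w[di, dj]
    return out

def bottleneck_residual(x, w_reduce, w_mid, w_expand):
    """Residual bottleneck as described: 1x1 reduces channels, 3x3 extracts
    features, 1x1 restores channels, and the identity shortcut adds the
    input so gradients can flow directly to earlier layers."""
    y = np.maximum(conv1x1(x, w_reduce), 0)   # 1x1 reduce + ReLU
    y = np.maximum(conv3x3(y, w_mid), 0)      # 3x3 feature extraction + ReLU
    y = conv1x1(y, w_expand)                  # 1x1 restore dimensionality
    return x + y                              # skip connection

rng = np.random.default_rng(5)
x = rng.standard_normal((8, 8, 64))
w1 = rng.standard_normal((64, 16)) * 0.05     # 64 -> 16 channels
w2 = rng.standard_normal((3, 3, 16, 16)) * 0.05
w3 = rng.standard_normal((16, 64)) * 0.05     # 16 -> 64 channels
out = bottleneck_residual(x, w1, w2, w3)
print(out.shape)  # (8, 8, 64)
```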
</sec>
<sec id="S5">
<title>Conclusion and prospect</title>
<p>To sum up, deep learning is primarily applied in facial aging research for age estimation and aging synthesis, and steady progress has been made in both in recent years. From the literature reviewed, it is clear that there is no consensus on the best algorithm type for estimation and synthesis tasks. We have described each algorithm used by researchers, highlighting their advantages and limitations, which will aid in selecting appropriate algorithms for future work in age estimation and aging synthesis. The main challenges in age estimation include the varying rates of facial appearance change within the same age group; in addition, obtaining an adequate amount of training data for age estimation is quite difficult. The challenges in aging synthesis primarily stem from the slow and irreversible nature of the aging process, which makes it difficult to control. Furthermore, each individual has a unique aging pattern influenced by various external factors, including weather conditions, health status, lifestyle, and genetic makeup.</p>
<p>During the age estimation process, factors such as facial occlusion, human expressions, and racial skin tones can affect the results. Future work needs to address these issues to develop better and more robust facial attribute recognition algorithms that can handle real-life situations. In facial aging synthesis, beyond considering changes in texture and skin color, personalized features such as lip thickness and the position of moles should also be taken into account to achieve a more realistic aging effect.</p>
</sec>
<sec id="S6">
<title>Conflicts of interest</title>
<p>The authors declare that the review was conducted in the absence of any commercial or financial relationship that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="S7" sec-type="author-contributions">
<title>Author contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1"><label>1.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bairakdar</surname> <given-names>MD</given-names></name> <name><surname>Tewari</surname> <given-names>A</given-names></name> <name><surname>Truttmann</surname> <given-names>MC</given-names></name></person-group>. <article-title>A meta-analysis of RNA-Seq studies to identify novel genes that regulate aging.</article-title> <source><italic>Exp Gerontol.</italic></source> (<year>2023</year>) <volume>173</volume>:<issue>112107</issue>. <pub-id pub-id-type="doi">10.1016/j.exger.2023.112107</pub-id> <pub-id pub-id-type="pmid">10653729</pub-id>.</citation></ref>
<ref id="B2"><label>2.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>X</given-names></name> <name><surname>Zhang</surname> <given-names>J</given-names></name> <name><surname>Ma</surname> <given-names>C</given-names></name> <name><surname>Li</surname> <given-names>X</given-names></name> <name><surname>Kuo</surname> <given-names>CL</given-names></name> <name><surname>Levine</surname> <given-names>ME</given-names></name> <etal/> </person-group> <article-title>Life course traumas and cardiovascular disease-the mediating role of accelerated aging.</article-title> <source><italic>Ann N Y Acad Sci</italic>.</source> (<year>2022</year>) <volume>1515</volume>(<issue>1</issue>):<fpage>208</fpage>&#x2013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1111/nyas.14843</pub-id> <pub-id pub-id-type="pmid">10145586</pub-id>.</citation></ref>
<ref id="B3"><label>3.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname> <given-names>Z</given-names></name> <name><surname>Wen</surname> <given-names>Y</given-names></name> <name><surname>Wang</surname> <given-names>J</given-names></name> <name><surname>Wang</surname> <given-names>M</given-names></name> <name><surname>Hong</surname> <given-names>R</given-names></name> <name><surname>Yan</surname> <given-names>S</given-names></name></person-group>. <article-title>Facial age estimation with age difference.</article-title> <source><italic>IEEE Trans Image Process</italic>.</source> (<year>2017</year>) <volume>26</volume>(<issue>7</issue>):<fpage>3087</fpage>&#x2013;<lpage>97</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2016.2633868</pub-id></citation></ref>
<ref id="B4"><label>4.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sabina</surname> <given-names>U</given-names></name> <name><surname>Whangbo</surname> <given-names>TK</given-names></name></person-group>. <article-title>Edge-based effective active appearance model for real-time wrinkle detection.</article-title> <source><italic>Skin Res Technol</italic>.</source> (<year>2021</year>) <volume>27</volume>(<issue>3</issue>):<fpage>444</fpage>&#x2013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1111/srt.12977</pub-id> <pub-id pub-id-type="pmid">8247305</pub-id>.</citation></ref>
<ref id="B5"><label>5.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lahza</surname> <given-names>H</given-names></name> <name><surname>Alsheikhy</surname> <given-names>AA</given-names></name> <name><surname>Said</surname> <given-names>Y</given-names></name> <name><surname>Shawly</surname> <given-names>T</given-names></name></person-group>. <article-title>A deep learning approach to predict chronological age.</article-title> <source><italic>Healthcare (Basel).</italic></source> (<year>2023</year>) <volume>11</volume>(<issue>3</issue>):<fpage>448</fpage>. <pub-id pub-id-type="doi">10.3390/healthcare11030448</pub-id> <pub-id pub-id-type="pmid">9914671</pub-id>.</citation></ref>
<ref id="B6"><label>6.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ran</surname> <given-names>S.</given-names></name></person-group> <source><italic>Face Aging Image Generation Based on Conditional Adversarial Autoencoder and Generative Adversarial Network.</italic></source> <publisher-loc>Shanghai</publisher-loc>: <publisher-name>Shanghai University of Finance and Economics</publisher-name> (<year>2023</year>).</citation></ref>
<ref id="B7"><label>7.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gong</surname> <given-names>C</given-names></name> <name><surname>Liu</surname> <given-names>R</given-names></name> <name><surname>Zhou</surname> <given-names>N</given-names></name> <name><surname>Luo</surname> <given-names>J</given-names></name> <name><surname>Kumar Jain</surname> <given-names>D</given-names></name></person-group>. <article-title>Smart memory storage solution and elderly oriented smart equipment design under deep learning.</article-title> <source><italic>Comput Intell Neurosci.</italic></source> (<year>2022</year>) <volume>2022</volume>:<issue>6448302</issue>. <pub-id pub-id-type="doi">10.1155/2022/6448302</pub-id> <pub-id pub-id-type="pmid">9110148</pub-id>.</citation></ref>
<ref id="B8"><label>8.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>da Cunha</surname> <given-names>ALG</given-names></name> <name><surname>Vasconcelos</surname> <given-names>R</given-names></name> <name><surname>Di Sessa</surname> <given-names>D</given-names></name> <name><surname>Sampaio</surname> <given-names>G</given-names></name> <name><surname>Ramalhoto</surname> <given-names>P</given-names></name> <name><surname>Zampieri</surname> <given-names>BF</given-names></name> <etal/> </person-group> <article-title>IncobotulinumtoxinA for the treatment of glabella and forehead dynamic lines: a real-life longitudinal case series.</article-title> <source><italic>Clin Cosmet Investig Dermatol.</italic></source> (<year>2023</year>) <volume>16</volume>:<fpage>697</fpage>&#x2013;<lpage>704</lpage>. <pub-id pub-id-type="doi">10.2147/CCID.S391709</pub-id> <pub-id pub-id-type="pmid">10040156</pub-id>.</citation></ref>
<ref id="B9"><label>9.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Azhar</surname> <given-names>I</given-names></name> <name><surname>Sharif</surname> <given-names>M</given-names></name> <name><surname>Raza</surname> <given-names>M</given-names></name> <name><surname>Khan</surname> <given-names>MA</given-names></name> <name><surname>Yong</surname> <given-names>HS</given-names></name></person-group>. <article-title>A decision support system for face sketch synthesis using deep learning and artificial intelligence.</article-title> <source><italic>Sensors (Basel).</italic></source> (<year>2021</year>) <volume>21</volume>(<issue>24</issue>):8178. <pub-id pub-id-type="doi">10.3390/s21248178</pub-id> <pub-id pub-id-type="pmid">8708226</pub-id>.</citation></ref>
<ref id="B10"><label>10.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goisauf</surname> <given-names>M</given-names></name> <name><surname>Cano Abad&#x00ED;a</surname> <given-names>M</given-names></name></person-group>. <article-title>Ethics of AI in radiology: a review of ethical and societal implications.</article-title> <source><italic>Front Big Data.</italic></source> (<year>2022</year>) <volume>5</volume>:<issue>850383</issue>. <pub-id pub-id-type="doi">10.3389/fdata.2022.850383</pub-id> <pub-id pub-id-type="pmid">9329694</pub-id></citation></ref>
<ref id="B11"><label>11.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>X</given-names></name> <name><surname>Zeng</surname> <given-names>H</given-names></name> <name><surname>Lin</surname> <given-names>L</given-names></name> <name><surname>Huang</surname> <given-names>Y</given-names></name> <name><surname>Lin</surname> <given-names>H</given-names></name> <name><surname>Que</surname> <given-names>Y</given-names></name></person-group>. <article-title>Deep learning-empowered crop breeding: intelligent, efficient and promising.</article-title> <source><italic>Front Plant Sci.</italic></source> (<year>2023</year>) <volume>14</volume>:<issue>1260089</issue>. <pub-id pub-id-type="doi">10.3389/fpls.2023.1260089</pub-id> <pub-id pub-id-type="pmid">10583549</pub-id>.</citation></ref>
<ref id="B12"><label>12.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Z</given-names></name> <name><surname>Liu</surname> <given-names>F</given-names></name> <name><surname>Yang</surname> <given-names>W</given-names></name> <name><surname>Peng</surname> <given-names>S</given-names></name> <name><surname>Zhou</surname> <given-names>JA</given-names></name></person-group>. <article-title>Survey of convolutional neural networks: analysis, applications, and prospects.</article-title> <source><italic>IEEE Trans Neural Netw Learn Syst</italic>.</source> (<year>2022</year>) <volume>33</volume>(<issue>12</issue>):<fpage>6999</fpage>&#x2013;<lpage>7019</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2021.3084827</pub-id></citation></ref>
<ref id="B13"><label>13.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sarker</surname> <given-names>IH</given-names></name></person-group>. <article-title>Machine learning: algorithms, real-world applications and research directions.</article-title> <source><italic>SN Comput Sci.</italic></source> (<year>2021</year>) <volume>2</volume>(<issue>3</issue>):<fpage>160</fpage>. <pub-id pub-id-type="doi">10.1007/s42979-021-00592-x</pub-id> <pub-id pub-id-type="pmid">7983091</pub-id>.</citation></ref>
<ref id="B14"><label>14.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>Z</given-names></name> <name><surname>Qi</surname> <given-names>H</given-names></name> <name><surname>Liu</surname> <given-names>Y</given-names></name> <name><surname>Hu</surname> <given-names>E</given-names></name></person-group>. <article-title>Design and implementation of opportunity signal perception unit based on time-frequency representation and convolutional neural network.</article-title> <source><italic>Sensors (Basel).</italic></source> (<year>2021</year>) <volume>21</volume>(<issue>23</issue>):<fpage>7871</fpage>. <pub-id pub-id-type="doi">10.3390/s21237871</pub-id> <pub-id pub-id-type="pmid">8659807</pub-id>.</citation></ref>
<ref id="B15"><label>15.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>C</given-names></name> <name><surname>Lin</surname> <given-names>S</given-names></name> <name><surname>Huang</surname> <given-names>Z</given-names></name> <name><surname>Yao</surname> <given-names>Y</given-names></name></person-group>. <article-title>Analysis of sentiment changes in online messages of depression patients before and during the COVID-19 epidemic based on BERT+BiLSTM.</article-title> <source><italic>Health Inf Sci Syst.</italic></source> (<year>2022</year>) <volume>10</volume>(<issue>1</issue>):<fpage>15</fpage>. <pub-id pub-id-type="doi">10.1007/s13755-022-00184-w</pub-id> <pub-id pub-id-type="pmid">9279529</pub-id>.</citation></ref>
<ref id="B16"><label>16.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Muzio</surname> <given-names>G</given-names></name> <name><surname>O&#x2019;Bray</surname> <given-names>L</given-names></name> <name><surname>Borgwardt</surname> <given-names>K</given-names></name></person-group>. <article-title>Biological network analysis with deep learning.</article-title> <source><italic>Brief Bioinform.</italic></source> (<year>2021</year>) <volume>22</volume>(<issue>2</issue>):<fpage>1515</fpage>&#x2013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.1093/bib/bbaa257</pub-id> <pub-id pub-id-type="pmid">7986589</pub-id>.</citation></ref>
<ref id="B17"><label>17.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>J</given-names></name> <name><surname>Hu</surname> <given-names>D</given-names></name></person-group>. <article-title>An image classification method based on adaptive attention mechanism and feature extraction network.</article-title> <source><italic>Comput Intell Neurosci.</italic></source> (<year>2023</year>) <volume>2023</volume>:<issue>4305594</issue>. <pub-id pub-id-type="doi">10.1155/2023/4305594</pub-id> <pub-id pub-id-type="pmid">9957639</pub-id>.</citation></ref>
<ref id="B18"><label>18.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>B</given-names></name> <name><surname>Jin</surname> <given-names>J</given-names></name> <name><surname>Liu</surname> <given-names>H</given-names></name> <name><surname>Yang</surname> <given-names>Z</given-names></name> <name><surname>Zhu</surname> <given-names>H</given-names></name> <name><surname>Wang</surname> <given-names>Y</given-names></name> <etal/> </person-group> <article-title>Trends and hotspots in research on medical images with deep learning: a bibliometric analysis from 2013 to 2023.</article-title> <source><italic>Front Artif Intell.</italic></source> (<year>2023</year>) <volume>6</volume>:<issue>1289669</issue>. <pub-id pub-id-type="doi">10.3389/frai.2023.1289669</pub-id> <pub-id pub-id-type="pmid">10665961</pub-id>.</citation></ref>
<ref id="B19"><label>19.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gong</surname> <given-names>X</given-names></name> <name><surname>Ying</surname> <given-names>W</given-names></name> <name><surname>Zhong</surname> <given-names>S</given-names></name> <name><surname>Gong</surname> <given-names>S</given-names></name></person-group>. <article-title>Text sentiment analysis based on transformer and augmentation.</article-title> <source><italic>Front Psychol.</italic></source> (<year>2022</year>) <volume>13</volume>:<issue>906061</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2022.906061</pub-id> <pub-id pub-id-type="pmid">9136405</pub-id>.</citation></ref>
<ref id="B20"><label>20.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhong</surname> <given-names>X</given-names></name> <name><surname>Xu</surname> <given-names>L</given-names></name> <name><surname>Li</surname> <given-names>C</given-names></name> <name><surname>An</surname> <given-names>L</given-names></name> <name><surname>Wang</surname> <given-names>L</given-names></name></person-group>. <article-title>RFE-UNet: remote feature exploration with local learning for medical image segmentation.</article-title> <source><italic>Sensors (Basel).</italic></source> (<year>2023</year>) <volume>23</volume>(<issue>13</issue>):<fpage>6228</fpage>. <pub-id pub-id-type="doi">10.3390/s23136228</pub-id> <pub-id pub-id-type="pmid">10346146</pub-id>.</citation></ref>
<ref id="B21"><label>21.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>B-C</given-names></name> <name><surname>Chen</surname> <given-names>C-S</given-names></name> <name><surname>Hsu</surname> <given-names>WH</given-names></name></person-group>. <article-title>Face recognition and retrieval using cross-age reference coding with cross-age celebrity dataset.</article-title> <source><italic>IEEE Trans Multimed.</italic></source> (<year>2015</year>) <volume>17</volume>(<issue>6</issue>):<fpage>804</fpage>&#x2013;<lpage>15.</lpage></citation></ref>
<ref id="B22"><label>22.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Zou</surname> <given-names>H</given-names></name> <name><surname>Hu</surname> <given-names>H</given-names></name></person-group>. <article-title>Cross-age face recognition using reference coding with kernel direct discriminant analysis.</article-title> <source><italic>2017 IEEE International Conference on Image Processing (ICIP)</italic></source> (<year>2017</year>).</citation></ref>
<ref id="B23"><label>23.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>Y</given-names></name> <name><surname>Teng</surname> <given-names>S</given-names></name> <name><surname>Fei</surname> <given-names>L</given-names></name> <name><surname>Zhang</surname> <given-names>W</given-names></name> <name><surname>Rida</surname> <given-names>I</given-names></name></person-group>. <article-title>A multifeature learning and fusion network for facial age estimation.</article-title> <source><italic>Sensors (Basel).</italic></source> (<year>2021</year>) <volume>21</volume>(<issue>13</issue>):<fpage>4597</fpage>. <pub-id pub-id-type="doi">10.3390/s21134597</pub-id> <pub-id pub-id-type="pmcid">PMC8271811</pub-id>.</citation></ref>
<ref id="B24"><label>24.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>K-H</given-names></name> <name><surname>Liu</surname> <given-names>T-J</given-names></name></person-group>. <article-title>A structure-based human facial age estimation framework under a constrained condition.</article-title> <source><italic>IEEE Trans Image Process.</italic></source> (<year>2019</year>) <volume>28</volume>(<issue>10</issue>):<fpage>5187</fpage>&#x2013;<lpage>200</lpage>.</citation></ref>
<ref id="B25"><label>25.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sharma</surname> <given-names>N</given-names></name> <name><surname>Sharma</surname> <given-names>R</given-names></name> <name><surname>Jindal</surname> <given-names>N</given-names></name></person-group>. <article-title>Prediction of face age progression with generative adversarial networks.</article-title> <source><italic>Multimed Tools Appl.</italic></source> (<year>2021</year>) <volume>80</volume>(<issue>25</issue>):<fpage>33911</fpage>&#x2013;<lpage>35</lpage>. <pub-id pub-id-type="doi">10.1007/s11042-021-11252-w</pub-id> <pub-id pub-id-type="pmcid">PMC8397612</pub-id>.</citation></ref>
<ref id="B26"><label>26.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shu</surname> <given-names>X</given-names></name> <name><surname>Xie</surname> <given-names>G-S</given-names></name> <name><surname>Li</surname> <given-names>Z</given-names></name> <name><surname>Tang</surname> <given-names>J</given-names></name></person-group>. <article-title>Age progression: current technologies and applications.</article-title> <source><italic>Neurocomputing.</italic></source> (<year>2016</year>) <volume>208</volume>:<fpage>249</fpage>&#x2013;<lpage>61</lpage>.</citation></ref>
<ref id="B27"><label>27.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Ricanek</surname> <given-names>K</given-names></name> <name><surname>Tesafaye</surname> <given-names>T</given-names></name></person-group>. <article-title>MORPH: a longitudinal image database of normal adult age-progression.</article-title> <source><italic>7th International Conference on Automatic Face and Gesture Recognition (FGR06)</italic></source> (<year>2006</year>).</citation></ref>
<ref id="B28"><label>28.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zeng</surname> <given-names>J</given-names></name> <name><surname>Zhao</surname> <given-names>X</given-names></name> <name><surname>Gan</surname> <given-names>J</given-names></name> <name><surname>Mai</surname> <given-names>C</given-names></name> <name><surname>Zhai</surname> <given-names>Y</given-names></name> <name><surname>Wang</surname> <given-names>F</given-names></name></person-group>. <article-title>Deep convolutional neural network used in single sample per person face recognition.</article-title> <source><italic>Comput Intell Neurosci.</italic></source> (<year>2018</year>) <volume>2018</volume>:<fpage>3803627</fpage>. <pub-id pub-id-type="doi">10.1155/2018/3803627</pub-id> <pub-id pub-id-type="pmcid">PMC6126063</pub-id>.</citation></ref>
<ref id="B29"><label>29.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yi</surname> <given-names>D</given-names></name> <name><surname>Lei</surname> <given-names>Z</given-names></name> <name><surname>Liao</surname> <given-names>S</given-names></name> <name><surname>Li</surname> <given-names>SZ.</given-names></name></person-group> <article-title>Learning face representation from scratch.</article-title> <source><italic>arXiv [Preprint]</italic></source> (<year>2014</year>).</citation></ref>
<ref id="B30"><label>30.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>S</given-names></name> <name><surname>Song</surname> <given-names>Y</given-names></name> <name><surname>Zhang</surname> <given-names>M</given-names></name> <name><surname>Zhao</surname> <given-names>J</given-names></name> <name><surname>Yang</surname> <given-names>S</given-names></name> <name><surname>Hou</surname> <given-names>K</given-names></name></person-group>. <article-title>An identity authentication method combining liveness detection and face recognition.</article-title> <source><italic>Sensors (Basel).</italic></source> (<year>2019</year>) <volume>19</volume>(<issue>21</issue>):<fpage>4733</fpage>. <pub-id pub-id-type="doi">10.3390/s19214733</pub-id> <pub-id pub-id-type="pmcid">PMC6864603</pub-id>.</citation></ref>
<ref id="B31"><label>31.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Schroff</surname> <given-names>F</given-names></name> <name><surname>Kalenichenko</surname> <given-names>D</given-names></name> <name><surname>Philbin</surname> <given-names>J</given-names></name></person-group>. <article-title>FaceNet: a unified embedding for face recognition and clustering.</article-title> <source><italic>2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</italic></source> (<year>2015</year>).</citation></ref>
<ref id="B32"><label>32.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liang</surname> <given-names>B</given-names></name> <name><surname>Yang</surname> <given-names>N</given-names></name> <name><surname>He</surname> <given-names>G</given-names></name> <name><surname>Huang</surname> <given-names>P</given-names></name> <name><surname>Yang</surname> <given-names>Y</given-names></name></person-group>. <article-title>Identification of the facial features of patients with cancer: a deep learning-based pilot study.</article-title> <source><italic>J Med Internet Res.</italic></source> (<year>2020</year>) <volume>22</volume>(<issue>4</issue>):<fpage>e17234</fpage>. <pub-id pub-id-type="doi">10.2196/17234</pub-id> <pub-id pub-id-type="pmcid">PMC7221634</pub-id>.</citation></ref>
<ref id="B33"><label>33.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y</given-names></name> <name><surname>Liu</surname> <given-names>L</given-names></name> <name><surname>Li</surname> <given-names>C</given-names></name> <name><surname>Loy</surname> <given-names>CC.</given-names></name></person-group> <article-title>Quantifying facial age by posterior of age comparisons.</article-title> <source><italic>arXiv [Preprint]</italic></source> (<year>2017</year>).</citation></ref>
<ref id="B34"><label>34.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alsaleh</surname> <given-names>A</given-names></name> <name><surname>Perkgoz</surname> <given-names>C</given-names></name></person-group>. <article-title>A space and time efficient convolutional neural network for age group estimation from facial images.</article-title> <source><italic>PeerJ Comput Sci.</italic></source> (<year>2023</year>) <volume>9</volume>:<fpage>e1395</fpage>. <pub-id pub-id-type="doi">10.7717/peerj-cs.1395</pub-id> <pub-id pub-id-type="pmcid">PMC10280577</pub-id>.</citation></ref>
<ref id="B35"><label>35.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sajid</surname> <given-names>M</given-names></name> <name><surname>Taj</surname> <given-names>IA</given-names></name> <name><surname>Bajwa</surname> <given-names>UI</given-names></name> <name><surname>Ratyal</surname> <given-names>NI</given-names></name></person-group>. <article-title>Facial asymmetry-based age group estimation: role in recognizing age-separated face images.</article-title> <source><italic>J Forensic Sci</italic>.</source> (<year>2018</year>) <volume>63</volume>(<issue>6</issue>):<fpage>1727</fpage>&#x2013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.1111/1556-4029.13798</pub-id></citation></ref>
<ref id="B36"><label>36.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Sheoran</surname> <given-names>V</given-names></name> <name><surname>Joshi</surname> <given-names>S</given-names></name> <name><surname>Bhayani</surname> <given-names>TR</given-names></name></person-group>. <article-title>Age and gender prediction using deep CNNs and transfer learning.</article-title> In: <person-group person-group-type="editor"><name><surname>Singh</surname> <given-names>SK</given-names></name> <name><surname>Roy</surname> <given-names>P</given-names></name> <name><surname>Raman</surname> <given-names>B</given-names></name> <name><surname>Nagabhushan</surname> <given-names>P</given-names></name></person-group>, <role>editors.</role> <source><italic>Computer Vision and Image Processing.</italic></source> <publisher-loc>Singapore</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2021</year>). p. <fpage>293</fpage>&#x2013;<lpage>304</lpage>.</citation></ref>
<ref id="B37"><label>37.</label><citation citation-type="thesis"><person-group person-group-type="author"><name><surname>Yulan</surname> <given-names>D.</given-names></name></person-group> <source><italic>Research on Deep Learning and Evaluation of Age Features from Face Images.</italic></source> <publisher-name>Guangdong University of Technology</publisher-name> (<year>2023</year>).</citation></ref>
<ref id="B38"><label>38.</label><citation citation-type="thesis"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>K.</given-names></name></person-group> <source><italic>Research on Age Prediction Method Based on the Combination of Local Facial Features and Global Facial Features.</italic></source> <publisher-name>Qilu University of Technology</publisher-name> (<year>2024</year>).</citation></ref>
<ref id="B39"><label>39.</label><citation citation-type="thesis"><person-group person-group-type="author"><name><surname>Sheng</surname> <given-names>M.</given-names></name></person-group> <source><italic>Research on Cross-age Face Recognition Based on Generative Adversarial Networks.</italic></source> <publisher-name>Jiangsu University</publisher-name> (<year>2022</year>).</citation></ref>
<ref id="B40"><label>40.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>MM</given-names></name> <name><surname>Di</surname> <given-names>WJ</given-names></name> <name><surname>Song</surname> <given-names>T</given-names></name> <name><surname>Yin</surname> <given-names>NB</given-names></name> <name><surname>Wang</surname> <given-names>YQ</given-names></name></person-group>. <article-title>Exploring artificial intelligence from a clinical perspective: a comparison and application analysis of two facial age predictors trained on a large-scale Chinese cosmetic patient database.</article-title> <source><italic>Skin Res Technol.</italic></source> (<year>2023</year>) <volume>29</volume>(<issue>7</issue>):<fpage>e13402</fpage>. <pub-id pub-id-type="doi">10.1111/srt.13402</pub-id> <pub-id pub-id-type="pmcid">PMC10308065</pub-id>.</citation></ref>
<ref id="B41"><label>41.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shi</surname> <given-names>C</given-names></name> <name><surname>Zhao</surname> <given-names>S</given-names></name> <name><surname>Zhang</surname> <given-names>K</given-names></name> <name><surname>Wang</surname> <given-names>Y</given-names></name> <name><surname>Liang</surname> <given-names>L</given-names></name></person-group>. <article-title>Face-based age estimation using improved Swin Transformer with attention-based convolution.</article-title> <source><italic>Front Neurosci.</italic></source> (<year>2023</year>) <volume>17</volume>:<fpage>1136934</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2023.1136934</pub-id> <pub-id pub-id-type="pmcid">PMC10130448</pub-id>.</citation></ref>
<ref id="B42"><label>42.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>B</given-names></name> <name><surname>Bao</surname> <given-names>Y</given-names></name></person-group>. <article-title>Age estimation of faces in videos using head pose estimation and convolutional neural networks.</article-title> <source><italic>Sensors (Basel).</italic></source> (<year>2022</year>) <volume>22</volume>(<issue>11</issue>):<fpage>4171</fpage>. <pub-id pub-id-type="doi">10.3390/s22114171</pub-id> <pub-id pub-id-type="pmcid">PMC9185429</pub-id>.</citation></ref>
<ref id="B43"><label>43.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huynh</surname> <given-names>HT</given-names></name> <name><surname>Nguyen</surname> <given-names>H</given-names></name></person-group>. <article-title>Joint age estimation and gender classification of Asian faces using wide ResNet.</article-title> <source><italic>SN Comput Sci</italic>.</source> (<year>2020</year>) <volume>1</volume>(<issue>5</issue>):<fpage>284</fpage>. <pub-id pub-id-type="doi">10.1007/s42979-020-00294-w</pub-id> <pub-id pub-id-type="pmcid">PMC7451232</pub-id>.</citation></ref>
<ref id="B44"><label>44.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Han</surname> <given-names>S</given-names></name> <name><surname>Guo</surname> <given-names>Y</given-names></name> <name><surname>Zhou</surname> <given-names>X</given-names></name> <name><surname>Huang</surname> <given-names>J</given-names></name> <name><surname>Shen</surname> <given-names>L</given-names></name> <name><surname>Luo</surname> <given-names>Y</given-names></name></person-group>. <article-title>A Chinese face dataset with dynamic expressions and diverse ages synthesized by deep learning.</article-title> <source><italic>Sci Data.</italic></source> (<year>2023</year>) <volume>10</volume>(<issue>1</issue>):<fpage>878</fpage>. <pub-id pub-id-type="doi">10.1038/s41597-023-02701-2</pub-id> <pub-id pub-id-type="pmcid">PMC10703811</pub-id>.</citation></ref>
<ref id="B45"><label>45.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>W</given-names></name> <name><surname>Yan</surname> <given-names>Y</given-names></name> <name><surname>Cui</surname> <given-names>Z</given-names></name> <name><surname>Feng</surname> <given-names>J</given-names></name> <name><surname>Yan</surname> <given-names>S</given-names></name> <name><surname>Sebe</surname> <given-names>N</given-names></name></person-group>. <article-title>Recurrent face aging with hierarchical autoregressive memory.</article-title> <source><italic>IEEE Trans Pattern Anal Mach Intell</italic>.</source> (<year>2019</year>) <volume>41</volume>(<issue>3</issue>):<fpage>654</fpage>&#x2013;<lpage>68</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2018.2803166</pub-id></citation></ref>
<ref id="B46"><label>46.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>H</given-names></name> <name><surname>Huang</surname> <given-names>D</given-names></name> <name><surname>Wang</surname> <given-names>Y</given-names></name> <name><surname>Jain</surname> <given-names>AK</given-names></name></person-group>. <article-title>Learning continuous face age progression: a pyramid of GANs.</article-title> <source><italic>IEEE Trans Pattern Anal Mach Intell</italic>.</source> (<year>2021</year>) <volume>43</volume>(<issue>2</issue>):<fpage>499</fpage>&#x2013;<lpage>515</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2019.2930985</pub-id></citation></ref>
<ref id="B47"><label>47.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>J</given-names></name> <name><surname>Zhou</surname> <given-names>K</given-names></name> <name><surname>Luximon</surname> <given-names>Y</given-names></name> <name><surname>Lee</surname> <given-names>TY</given-names></name> <name><surname>Li</surname> <given-names>P</given-names></name></person-group>. <article-title>MeshWGAN: mesh-to-mesh Wasserstein GAN with multi-task gradient penalty for 3D facial geometric age transformation.</article-title> <source><italic>IEEE Trans Vis Comput Graph</italic>.</source> (<year>2024</year>) <volume>30</volume>(<issue>8</issue>):<fpage>4927</fpage>&#x2013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1109/TVCG.2023.3284500</pub-id></citation></ref>
<ref id="B48"><label>48.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Antipov</surname> <given-names>G</given-names></name> <name><surname>Baccouche</surname> <given-names>M</given-names></name> <name><surname>Dugelay</surname> <given-names>J-L.</given-names></name></person-group> <source><italic>Face Aging With Conditional Generative Adversarial Networks</italic></source> (<year>2017</year>).</citation></ref>
<ref id="B49"><label>49.</label><citation citation-type="thesis"><person-group person-group-type="author"><name><surname>BoWen</surname> <given-names>W.</given-names></name></person-group> <source><italic>Research on Improvement and Application of Generative Adversarial Network.</italic></source> <publisher-name>University of Electronic Science and Technology of China</publisher-name> (<year>2022</year>).</citation></ref>
<ref id="B50"><label>50.</label><citation citation-type="thesis"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>Z.</given-names></name></person-group> <source><italic>Research on Face Aging Algorithm Based on Generative Adversarial Network.</italic></source> <publisher-name>Northwest University</publisher-name> (<year>2023</year>).</citation></ref>
<ref id="B51"><label>51.</label><citation citation-type="thesis"><person-group person-group-type="author"><name><surname>Qiujian</surname> <given-names>B.</given-names></name></person-group> <source><italic>Research on Face Aging Based on Deep Learning.</italic></source> <publisher-name>University of Electronic Science and Technology of China</publisher-name> (<year>2019</year>).</citation></ref>
<ref id="B52"><label>52.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xia</surname> <given-names>X</given-names></name> <name><surname>Chen</surname> <given-names>X</given-names></name> <name><surname>Wu</surname> <given-names>G</given-names></name> <name><surname>Li</surname> <given-names>F</given-names></name> <name><surname>Wang</surname> <given-names>Y</given-names></name> <name><surname>Chen</surname> <given-names>Y</given-names></name> <etal/> </person-group> <article-title>Three-dimensional facial-image analysis to predict heterogeneity of the human ageing rate and the impact of lifestyle.</article-title> <source><italic>Nat Metab</italic>.</source> (<year>2020</year>) <volume>2</volume>(<issue>9</issue>):<fpage>946</fpage>&#x2013;<lpage>57</lpage>. <pub-id pub-id-type="doi">10.1038/s42255-020-00270-x</pub-id></citation></ref>
</ref-list>
</back>
</article>
